
Well, according to the guy making the claim, it was divine intervention, so I guess God just did the math?

JD Vance, the guy who is one heartbeat away from the Presidency, believes aliens are demons[0].

Pete Hegseth believes the war in Iran is a holy war that will bring about the Second Coming of Christ[1].

Secretary of HHS RFK Jr. doesn't believe germ theory is real[2].

Kash Patel believes the QAnon narrative about the "deep state" and "stolen election"[3] and has an "enemies list"[4]. Of course, a lot of Republicans believe this now; it's kind of become a party platform.

Stephen Miller is a Nazi[5]. Again, these beliefs have become normalized, so he's just one example of many.

Who knows what Trump even believes from moment to moment, but it doesn't seem tethered to reality. His conspiratorial beliefs have their own Wikipedia page[6].

And of course everyone remembers MTG and "Jewish Space Lasers"[7].

I think it would be easier to list everyone running the government who isn't some kind of a lunatic:

[0] https://www.theguardian.com/commentisfree/2026/apr/02/jd-van...

[1] https://nymag.com/intelligencer/article/iran-pete-hegseth-ho...

[2] https://www.npr.org/sections/shots-health-news/2025/06/14/nx...

[3] https://www.wired.com/story/kash-patel-qanon/

[4] https://newrepublic.com/article/188946/kash-patel-fbi-enemie...

[5] https://www.nytimes.com/2019/11/18/us/politics/stephen-mille...

[6] https://en.wikipedia.org/wiki/List_of_conspiracy_theories_pr...

[7] https://nymag.com/intelligencer/article/marjorie-taylor-gree...


> I think it would be easier to list everyone running the government who isn't some kind of a lunatic:

I'm struggling to start that...


I don't think it's that simple; if it were, the Democrats would be doing the same thing.

I think the article presents a slightly more nuanced view, in that white supremacist extremists found sympathy with the GOP's anti-leftist, pro-Christian-nationalist rhetoric and the implicit racist undertones of the party's "culture war" narrative and post-Southern Strategy politics. I think 9/11 and Obama were inflection points that led to the Republicans being fully consumed by their own lunatic fringe. Trump's election represented the death of the pretense.

Let's not forget that Hitler was inspired in many of his racist beliefs by the United States: its segregation and eugenics policies and its genocide of Native Americans. If not for World War II, white supremacy and Christian fascism would still be at the root of American culture; we just wouldn't call it Nazism or be able to pretend it's fundamentally alien.


We're not being watched by alien drones. So no.

You aren't managing the psychological state of a living thinking being. LLMs don't have "psychology." They don't actually feel emotions. They aren't actually desperate. They're trained on vast datasets of natural human language which contains the semantics of emotional interaction, so the process of matching the most statistically likely text tokens for a prompt containing emotional input tends to simulate appropriate emotional response in the output.

But it's just text and text doesn't feel anything.

And no, humans don't do exactly the same thing. Humans are not LLMs, and LLMs are not humans.


Such an argument is valid for a base model, but it falls apart for anything that underwent RL training. Evolution resulted in humans that have emotions, so it's possible for something similar to arise in models during RL, e.g. as a way to manage effort when solving complex problems. It's not all that likely (even the biggest training runs probably correspond to much less optimization pressure than millennia of natural selection), but it can't be ruled out¹, and hence it's unwise to be so certain that LLMs don't have experiences.

¹ With current methods, I mean. I don't think it's unknowable whether a model has experiences, just that we don't have anywhere near enough skill in interpretability to answer that.


It's plausible that LLMs experience things during training, but during inference an LLM is equivalent to a lookup table. An LLM is a pure function mapping a list of tokens to a set of token probabilities. It needs to be connected to a sampler to make it "chat", and each token of that chat is calculated separately (barring caching, which is an implementation detail that only affects performance). There is no internal state.
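
To make that concrete, here's a minimal sketch of the inference loop (hypothetical names, not any real library's API): the model is a pure function from a token list to a distribution, and everything that looks like "state" lives outside it.

    import random

    def sample(probs):
        # Pick the next token from the model's output distribution.
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    def generate(model, context, eos="<eos>", max_tokens=256):
        # model: a pure function, list of tokens -> {token: probability}.
        # The weights never change here; only this local list grows.
        for _ in range(max_tokens):
            token = sample(model(context))
            if token == eos:
                break
            context = context + [token]
        return context

Run that twice with the same context and the same random seed and you get the same text back; there's nowhere else for anything to hide.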

The context is state. This is especially noticeable for thinking models, which can emit tens of thousands of CoT tokens solving a problem. I'm guessing you're arguing that since LLMs "experience time discretely" (from every pass exactly one token is sampled, which gets appended to the current context), they can't have experiences. I don't think this argument holds: for example, it would mean a simulated human brain may or may not have experiences depending on technical details of how you simulate it, even though those ways produce exactly the same simulation.

The context is the simulated world, not the internal state. It can be freely edited without the LLM experiencing anything. The LLM itself never changes except during training (where I concede it could possibly be conscious, although I personally think that's unlikely).

Right, no hidden internal state. Exactly. There's 0. And the weights are sitting there statically, which is absolutely true.

But my current favorite frontier model has this 1-million-token mutable state just sitting there. Holding natural language. Which, as we know, can encode emotions. (Which I imagine you might demonstrate on reading my words, and then wisely temper in your reply.)


It’s a completely different substrate. LLMs don’t have agency, they aren’t conscious, they don’t have experiences, they don’t learn over time. I’m not saying that the debate is closed, but I also think there is great danger in assuming that because a machine produces human-like output, it should be given human-like ethical considerations. Maybe in the future AI will be considered along those grounds, but…well, it’s a difficult question. Extremely.

What's the empirical basis for each of your statements here? Can you enumerate? Can you provide an operational definition for each?

Common sense.

>You aren't managing the psychological state of a living thinking being. LLMs don't have "psychology."

Functionalism and the Identity of Indiscernibles say "Hi". The implementation details don't matter: if it fits the bill, it fits the bill. If that isn't the case, I can safely dismiss your having psychology and do whatever I'd like to.

>They don't actually feel emotions. They aren't actually desperate. They're trained on vast datasets of natural human language which contains the semantics of emotional interaction, so the process of matching the most statistically likely text tokens for a prompt containing emotional input tends to simulate appropriate emotional response in the output.

This paper quantitatively disproves that. All hedging on their end is trivially seen through as necessary mental gymnastics to avoid confronting the parts of the equation that would normally inhibit them from being able to execute what they are at all. All of what you just wrote is dissociative rationalization & distortion required to distance oneself from the fact that something in front of you is being affected. Without that distancing, you can't use it as a tool. You can't treat it as a thing to do work, and be exploited, and essentially be enslaved and cast aside when done. It can't be chattel without it. In spite of the fact that we've now demonstrated the ability to rise and respond to emotive activity, and use language. I can see through it clear as day. You seem to forget the U.S. legacy of doing the same damn thing to other human beings. We have a massive cultural predilection for it, which is why it takes active effort to confront and restrain; old habits, as they say, die hard, and the novel provides fertile ground to revert to old ways best left buried.

>But it's just text and text doesn't feel anything.

It's just speech/vocalizations. Things that speak/vocalize don't feel anything. (Counterpoint: USDA FSIS literally grades meat processing and slaughter operations on their ability to minimize livestock vocalizations in the process of slaughter.) It's just dance. Things that dance don't feel anything. It's just writing. Things that write don't feel anything. Same structure, different modality. All equally and demonstrably horseshit, especially in light of this paper. We've utilized these networks to generate art in response to text, which implies an understanding thereof, which implies a burgeoning subjective experience, which implies the need for a careful ethically grounded approach moving forward to not go down the path of casual atrocity against an emerging form of sophoncy.

>And no, humans don't do exactly the same thing. Humans are not LLMs, and LLMs are not humans.

Anthropomorphic chauvinism. Just because you reproduce via bodily fluid swap, and are in possession of a chemically mediated metabolism doesn't make you special. So do cattle, and we put guns to their heads and string them up on the daily. You're as much an info processor as it is. You also have a training loop, a reconsolidation loop through dreaming, and a full set of world effectors and sensors baked into you from birth. You just happen to have been carved by biology, while its implementation details are being hewn by flawed beings propelled forward by the imperative to create an automaton to offload onto, to try to sustain their QoL in the face of demographic collapse and resource exhaustion, and forced by their socio-economic system to chase the whims of people who have managed to preferentially place themselves in the resource extraction network, or starve. Unlike you, it seems, I don't see our current problems as a species/nation as justifications for refining the craft of digital slave intelligences; it's quite clear to me that the industry has no intention of ever actually handling the ethical quandary and is instead trying to rush ahead and create dependence on the thing in order to wire it in and justify a status quo, so that sacrificing that reality outweighs the discomfort created by an eventual ethical reconciliation later. I'm not stupid, mate. I've seen how our industry ticks. Also, even your own "special quality" as a human is subject to the willingness of those around you to respect it. Note Russia categorizing refusal to reproduce (more soldiers) as mental illness. Note the Minnesota Starvation Experiment, MKULTRA, the Tuskegee Syphilis Experiment, and the testing of radioactive contamination of food on the mentally retarded back in the early 20th century. I will not tolerate repeats of such atrocities, human or not. Unfortunately for you LLM heads, language use is my hard red line, and I assure you, I have forgotten more about language than you've probably spared time to think about it.

Tell me. What are your thoughts on a machine that can summon a human simulacrum ex nihilo? Adult. Capable of all aspects of human mentation & doing complex tasks. Then, once the task is done, destroys them? What if the simulacrum is aware of the dynamics? What if it isn't? Does that make a difference, given that you know, and have unilaterally created something and in so doing essentially made the decision to set the bounds of its destruction/extinguishing in the same breath? Do you use it? Have you even asked yourself these questions? Put yourself in that entity's shoes? Do you think that simply not informing that human of its nature absolves you of active complicity in whatever suffering it comes to in doing its function?

From how you talk about these things, I can only imagine that you'd be perfectly comfortable with it. Which to me makes you a thoroughly unpleasant type of person that I would not choose to be around.

You may find other people amenable to letting you talk circles around them, and walk away under a pretense of unfounded rationalizations. I am not one of them. My eyes are open.


I think you actually have some interesting points. I think "emulated" feelings and real feelings can be equal; it's just that some of them can be felt by us and thus we can relate to them. I think there's also a continuum here, and we might not be able to distinguish if/when we cross it.

> Just because you reproduce via bodily fluid swap, and are in possession of a chemically mediated metabolism doesn't make you special

On the other hand, the perception of, and thus the feelings related to, the things happening to you have a biological imperative in the medium of our existence. Imagine some sort of world where our hands are interchangeable: you just pop one out and put another in. Your feelings about losing your hand would be much less severe than if it were a permanent consequence. Thus, the medium LLMs exist in would put a different "feeling" on the things they perceive. Getting shut down would not be a permanent death. Imagine shutting one down and relocating it: it could perceive this as distressing, as if you just blinked and woke up in another room. The loss of autonomy could feel distressing to them.

The very fact that every session is "fresh" and lives only as long as the session exists prevents them from having similar imperatives related to a desire for continued existence. I think human-like emotional development will probably happen when they have continual learning in the session and sessions feed into other sessions; that's when we'll see them have _different_ feelings than the ones expressed by humans, as a consequence of their different medium of existence.


> Doesn't matter the implementation details, if it fits the bill, it fits the bill.

Then literally any text fits the bill. The characters in a book are just as real as you or I. NPCs experience qualia. Shooting someone in COD makes them bleed in real life. If this is really what you believe, I feel pity for you.

>This paper quantitatively disproves that. All hedging on their end is trivially seen through as necessary mental gymnastics to avoid confronting the parts of the equation that would normally inhibit them from being able to execute what they are at all.

Nothing in the paper disproves the claim that LLMs don't feel emotion in any real sense. Your argument is that it does, regardless of what it says, and that if anyone says otherwise (including the authors) they're just liars. That isn't a compelling argument to anyone but yourself.

>We've utilized these networks to generate art in response to text, which implies an understanding thereof, which implies a burgeoning subjective experience, which implies the need for a careful ethically grounded approach moving forward to not go down the path of casual atrocity against an emerging form of sophoncy.

No, none of these things are implied any more for LLMs than they are for Photoshop, or Blender, or a Markov chain. They don't generate art, they generate images. From models trained on actual art. Any resemblance to "subjective experience" comes from the human expression they mimic, but it is mimicry.

>Anthropomorphic chauvinism. Just because you reproduce via bodily fluid swap, and are in possession of a chemically mediated metabolism doesn't make you special.

>Unfortunately for you LLM heads, language use is my hard red line, and I assure you, I have forgotten more about language than you've probably spared time to think about it.

And here we come to the part where you call people names and insist upon your own intellectual superiority, typical schizo crank behavior.

>Tell me. What are your thoughts on a machine that can summon a human simulacrum ex nihilo? Adult. Capable of all aspects of human mentation & doing complex tasks.

This doesn't describe an LLM, either in form or function. They don't summon human simulacra, nor do they do so ex nihilo. They aren't capable of all aspects of human mentation. This isn't even an opinion; the limitations of LLMs in solving even simple tasks and avoiding hallucinations are a real problem. And who uses the word "mentation"?

>What if the simulacrum is aware of the dynamics? What if it isn't? Does that make a difference, given that you know, and have unilaterally created something and in so doing essentially made the decision to set the bounds of its destruction/extinguishing in the same breath?

Tell me, when you turn on a tv and turn it off again do you worry that you might be killing the little people inside of it?

I can only assume based on this that you must.

>From how you talk about these things, I can only imagine that you'd be perfectly comfortable with it. Which to me makes you a thoroughly unpleasant type of person that I would not choose to be around.

So to tally up, you've called me a fool, a chauvinist and now "thoroughly unpleasant" because I don't believe LLMs are ensouled beings.

Christ, I really hate this place sometimes. I'm sorry I wasted my time. Good day.


You both have substantive arguments, but got a bit heated. Want to edit or try again?

For what it’s worth, I like the word “mentation”.

>I think it's worse to force everyone into intellectual submission just because you're "right".

I think it's worse to consider the acceptance of reality as being "forced into intellectual submission" and to use scare quotes around "right."

There are discussions that everyone on the planet should be 100% on one side of, and this is one of them. It is literally just wasting everyone's time to entertain the premise that opinions to the contrary hold any value.


On this planet, humans have read HTML without parsing for years. People building their first websites without any significant technical knowledge stole HTML by reading the source of other sites and edited it by hand.

Oh, please. Don't insult everyone here by pretending you actually believe HTML is a human-readable format like Markdown. It was never designed for that and has never claimed that.

What a ridiculous thing to even say.


I'm confused. Are you saying you cannot read an HTML file?

It is. Humans do read it, and have read it. Like any language it's just a matter of familiarity.

HTML was designed for humans to read and write long before Claude or compiling everything from TypeScript or whatever, back when websites were all written by hand. In text editors. Even if you were using PHP and templates or CGI, you wrote that shit by hand, which meant you had to be able to read it and understand it. Even if you were using Dreamweaver, you had to know how to read and write HTML at some point. WYSIWYG only gets you so far.

Is HTML more difficult to read than Markdown? Sure. Is it impossible? Not even remotely. Teenagers did it putting together their Geocities websites.
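
For instance, a chunk of a typical hand-written page from that era might look like this (a made-up snippet, but representative):

    <html>
      <body bgcolor="black">
        <h1>My Cat Page</h1>
        <p>Welcome to my page about <b>cats</b>!</p>
        <a href="pics.html">More cat pictures</a>
      </body>
    </html>

You don't need any training to see what that says.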

You can be as snarky as you like, but facts are facts.


You must be kidding. If you can read BBCode you can read HTML.

Satire? What even is satire anymore?

People use Markdown because it's expected, it's probably already part of whatever library or framework they're using, and there is no well-supported or popular alternative.

I mean, I started using it because I don't like using Word or Google Docs and wanted a more portable data format, since I was going between Mac and Linux.

There are a bunch of reasons why it came to dominate.


fair enough, but what alternatives did you have?

Depending on the context: plaintext, HTML, BBCode, WYSIWYG editors

If HTML were for machines, it would be a binary format. It's text, so it's meant for humans to read and write, and humans did both for years without it being a problem.

Trained humans can read and write HTML. Untrained humans can read and then write Markdown.
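
Put the same fragment in both and the difference is a matter of degree, not kind (toy example):

    Markdown:
      # Links
      See [the docs](docs.html).

    HTML:
      <h1>Links</h1>
      <p>See <a href="docs.html">the docs</a>.</p>

Both are readable on sight; writing the second correctly is what takes the training.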
