
The phenomenon of waking up before an especially important alarm speaks against the notion that our cognition ‘stops’ in anything like the same way that an LLM is stopped when not actively predicting the next tokens in an output stream.


Folks are missing the point, so let me offer some clarification.

The silly example I provided in this thread is poking fun at the notion that LLMs can't be sentient because they aren't processing data all the time. Just because an agent isn't sentient for some period of time doesn't mean it can't be sentient the rest of the time. Picture somebody waking up from a deep coma rather than from sleep, if that works better for you.

I am not saying that LLMs are sentient, either. I am only showing that an argument based on the intermittency of their data processing is weak.


Granted.

Although, setting aside the question of sentience, there’s a more serious point I’d make about the dissimilarity between the always-on nature of human cognition and the episodic activation of an LLM for next-token prediction: I suspect current model architectures lack a fundamental element of what makes us generally intelligent, namely that we are constantly building mental models of how the world works, which we refine and probe through our actions (and indeed, we integrate the outcomes of those actions into our models as we sleep).

Whether it’s a toddler discovering kinematics by throwing their toys around or adolescents grasping social dynamics by testing and breaking boundaries, this learning loop is fundamental to how we even have concepts that we can signify with language in the first place.

LLMs operate in the domain of signifiers that we humans have created, with no experiential or operational ground truth in what was signified, and a corresponding lack of grounding in the world models behind those concepts.

Nowhere is this more evident than in the inability of coding agents to adhere to a coherent model of computation in what they produce; never mind a model of the complex human-computer interactions in the resulting software systems.


They’re not missing the point; you have a very imprecise understanding of human biology, and it has led you to a ham-fisted metaphor that is empirically too leaky to be of any use.

Even your attempted correction doesn’t work, because a body in a coma is still running thousands of processes and responds to external stimuli.


I suggest reading the thread again to aid in understanding. My argument has precisely nothing to do with human biology, and everything to do with "pauses in data processing do not make sentience impossible".

Unless you are seriously arguing that people could not be sentient while awake if they became non-sentient while sleeping, unconscious, or in a coma. I didn't address that angle because it seemed contrary to the spirit of steel-manning [0].

[0] https://news.ycombinator.com/newsguidelines.html


If you cut someone who is in a deep coma, their body will respond to that stimulus by sending platelets and white blood cells to the wound. There is data and it is being received, processed, and responded to.

Again, your poor understanding of biology and your reductive definition of "data" are leading you to double down on an untenable position. You are now arguing for a pure abstraction that can have no relationship to human biology, since your definition of "pause" is incompatible not only with human life, but even with accurately describing a human body minutes and hours after death.

This could be an interesting topic for science fiction or xenobiology, but is worse than useless as a metaphor.


> There is data and it is being received, processed, and responded to.

And that is orthogonal to this thread. The argument to which I originally replied is this:

>>> For a current LLM time just "stops" when waiting from one prompt to the next. That very much prevents it from being proactive: you can't tell it to remind you of something in 5 minutes without an external agentic architecture. I don't think it is possible for an AI to achieve sentience without this either.
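
(To make the quoted point concrete: the "external agentic architecture" it mentions can be as small as a scheduler that sleeps and then issues a fresh prompt; the model never wakes itself up. A minimal sketch in Python, with call_llm as a purely hypothetical stand-in for whatever model API one happens to use:)

    import time

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real model API call; the model
        # only runs when this function is invoked.
        return f"(model output for: {prompt!r})"

    def remind_in(minutes: float, reminder: str) -> None:
        # The waiting happens entirely outside the model: an external
        # process sleeps, then re-prompts the model when time is up.
        time.sleep(minutes * 60)
        print(call_llm(f"Remind the user to {reminder}."))

    remind_in(5, "check the oven")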

Summarizing: this user doesn't believe that an agent can achieve sentience if the agent processes data intermittently. Do you agree that this is a fair summary?

Now, do you believe that it's a reasonable argument to make? Because if you agree with it, then you also believe that humans would not be sentient if they processed stimuli intermittently. Whether humans actually process sensory stimuli intermittently or not does not even matter in this discussion, a point that apparently has still not stuck.

I am sorry if the way I have presented this argument from the beginning was not clear enough. It has remained unchanged through the whole thread, so if you perceive it as moving goalposts, it just means either I didn't present it clearly enough or people have been unable to understand it for some other reason. Perhaps asking a non-sentient AI to explain it more clearly could be of help.



