
The difference is that humans can and did find their own food for literally ages. That's already a very, very important difference. And while we cannot really define what is conscious, it's a bit easier (still with some edge cases) to define what is alive. And probably whatever is alive has some degree of consciousness. An LLM definitely does not.


One of the "barriers" to me is that (AFAIK) an LLM/agent/whatever doesn't operate without you hitting the equivalent of an on switch.

It does not think idle thoughts while it's not being asked questions. It's not ruminating over its past responses after having replied. It's just off until the next prompt.

Side note: whatever future we get where LLMs get their own food is probably not one I want a part of. I've seen the movies.


This barrier is trivial to overcome even today. It is not hard to put an LLM in an infinite loop of self-prompting.
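
A minimal sketch of what I mean, assuming a hypothetical llm() function that wraps whatever chat/completion API you like (the function name and the prompt wording are placeholders, not any real library's API):

    # Self-prompting loop: feed each output back in as the next input,
    # so the model keeps "thinking" with no human in the loop.
    def llm(prompt: str) -> str:
        raise NotImplementedError  # stand-in: swap in a real API call

    def ruminate(seed: str) -> None:
        thought = seed
        while True:
            # The model's previous output becomes its next prompt.
            thought = llm(
                "Your previous thought:\n" + thought +
                "\n\nContinue this line of thinking."
            )
            print(thought)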


A self-prompting loop still seems artificial to me. It only exists because you force it to externally.


By that logic, you only exist because something external brought you into being. Everything has a beginning.

In fact, what is actually artificial is halting an LLM's generation when it emits a 'stop token'.

A more natural barrier is the context window, but at 2 million tokens an LLM can think for a long time without losing any context. And you can take over with memory tools for longer-horizon tasks.
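
To make the stop-token point concrete, here is a sketch of a bare decoding loop, assuming a hypothetical next_token() sampler and STOP_ID constant (both are illustrative, not a real library's API). The halt is a check in the harness, not something the model does on its own:

    # Token-by-token decoding: the model never halts itself; the harness
    # stops when it sees the stop token, or when the context window fills.
    STOP_ID = 0                 # hypothetical end-of-sequence token id
    CONTEXT_LIMIT = 2_000_000   # e.g. a 2M-token context window

    def next_token(context: list[int]) -> int:
        raise NotImplementedError  # stand-in for a real sampler

    def decode(context: list[int]) -> list[int]:
        while len(context) < CONTEXT_LIMIT:
            tok = next_token(context)
            if tok == STOP_ID:  # the "artificial" halt described above
                break
            context.append(tok)
        return context

Drop that STOP_ID check and generation only ends when the context window runs out.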


Good points. :) Thank you.



