Hacker News

Regardless of accusations of anthropomorphizing, continual thinking seems to be a precursor to any sense of agency, simply because agency requires something to be running.

LLM output eventually degrades when most of the context is its own output. So should there also be an input stream of experience? The proverbial "staring out the window", fed into the model to keep it grounded and give it hooks to go off of?
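As a toy sketch of what that interleaving might look like: a rolling context window where external observations are mixed in with the model's own output, so self-generated text can never fully dominate the window. The `think` and `observe` functions here are stubs I made up for illustration, not any real model or sensor API; the mechanism is just the bookkeeping around them.

```python
from collections import deque

def think(context):
    """Stub standing in for an LLM call: produces a 'thought' from context."""
    return f"thought({len(context)})"

def observe(t):
    """Stub external input stream: the proverbial 'staring out the window'."""
    return f"observation({t})"

def continual_loop(steps, window=8, obs_every=2):
    """Run a continual loop with a rolling context that interleaves
    external observations with the model's own output."""
    context = deque(maxlen=window)  # oldest entries fall off automatically
    for t in range(steps):
        if t % obs_every == 0:
            context.append(("obs", observe(t)))       # grounding input
        context.append(("self", think(list(context))))  # model's own output
    return list(context)

if __name__ == "__main__":
    final = continual_loop(steps=10)
    self_frac = sum(1 for kind, _ in final if kind == "self") / len(final)
    print(f"self-output fraction of context: {self_frac:.2f}")
```

With these parameters the window ends up roughly three-quarters self-output; tightening `obs_every` shifts that balance toward grounding. The point is only that the ratio is a knob, not an accident of generation length.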



Though it kind of reminds me of The Shining (too much time alone with one's own thoughts drives one to insanity). It seems like we need to evolve intelligence, perception, and agency very closely in tandem, or an imbalance in any one will send the system off the rails.



