
"AGI needs to update beliefs when contradicted by new evidence" is a great idea, however, the article's approach of building better memory databases (basically fancier RAG) doesn't seem enable this. Beliefs and facts are built into LLMs at a very low layer during training. I wonder how they think they can force an LLM to pull from the memory bank instead of the training data.


LLMs are not the proposed solution.

(Also, LLMs don't have beliefs or other mental states. As for facts, it's trivially easy to get an LLM to say that it was previously wrong ... but multiple contradictory claims cannot all be facts.)


> how they think they can force an LLM to pull from the memory bank instead of the training data

You have to implement procedurality first (e.g. counting, after proper instancing of ideas).
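One reading of this: procedures like counting should be executed explicitly over instantiated objects rather than recalled from the model's weights. A rough sketch under that assumption (the function names are mine, purely illustrative):

    # Rough sketch: the idea ("letters of a word") is first instanced as
    # concrete items; the counting is then run as an explicit procedure in
    # code, so the answer cannot be overridden by whatever the weights recall.

    def instantiate_letters(word: str) -> list[str]:
        """Instance the abstract idea 'letters of <word>' as concrete items."""
        return list(word)

    def count_occurrences(items: list[str], target: str) -> int:
        """The procedure itself: deterministic counting over the instances."""
        return sum(1 for item in items if item == target)

    letters = instantiate_letters("strawberry")
    n = count_occurrences(letters, "r")
    print(f"'r' appears {n} times")  # 3, computed procedurally, not recalled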



