
That is just not in evidence.

Seems like it's nailing it to me. You ask about a scenario and it gives an appropriate answer.

We have evidence that LLMs build models of the things they are learning about. Have a look at this paper:

Do Large Language Models learn world models or just surface statistics?

https://thegradient.pub/othello/

Previously discussed: https://news.ycombinator.com/item?id=34474043
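The linked Othello paper's core technique is training probes on a model's internal activations: if the network really tracks board state rather than surface statistics, a simple classifier over its hidden states should recover each square's contents well above chance. Here's a minimal, hypothetical sketch of that idea using synthetic "hidden states" (all dimensions, noise levels, and data here are made up for illustration, not taken from the paper):

```python
import numpy as np

# Sketch of linear probing: synthetic hidden states where a random
# direction encodes one board square's label (empty / black / white).
# A real experiment would use activations from a trained transformer.
rng = np.random.default_rng(0)

d_model = 64        # hidden-state width (made-up value)
n_samples = 2000

directions = rng.normal(size=(3, d_model))          # one direction per class
labels = rng.integers(0, 3, size=n_samples)
hidden = directions[labels] + 0.5 * rng.normal(size=(n_samples, d_model))

# Train a softmax linear probe with plain gradient descent on cross-entropy.
W = np.zeros((d_model, 3))
for _ in range(300):
    logits = hidden @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(n_samples), labels] -= 1.0           # grad of CE wrt logits
    W -= 0.1 * hidden.T @ p / n_samples

acc = (np.argmax(hidden @ W, axis=1) == labels).mean()
print(f"probe accuracy: {acc:.2f}")   # far above the 0.33 chance level
```

The paper's stronger result is the intervention experiment: editing the probed "board state" in the activations changes the model's predicted legal moves accordingly, which a passive probe like this sketch can't show on its own.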


