
... and any other answer is just special pleading toward what people want to be true. "What LLMs can't do" is increasingly a "God of the gaps" argument -- someone states what they believe is a fundamental limitation, and a later model then shows the limitation doesn't hold. Maybe some real limits exist, maybe they don't, but _to me_ we seem very far from finding limits that can't be scaled away, and any proposed scaling issues feel very much like Tsiolkovsky's "tyranny of the rocket equation".

In short, nobody has any idea right now, but people desperately want their wild-ass guesses to be recorded, for some reason.


