
> These LLMs seem so smart.

Yes, they do *seem* smart. My experience with a wide variety of LLM-based tools is that they are the industrialization of the Dunning-Kruger effect.



It's more likely the opposite: humans rationalize their errors out the wazoo, and LLMs are showing us we really aren't very smart at all.




