
Still not getting it, I think.

My point is: LLMs sound very plausible and very confident when they are wrong.

That’s it. And I was just offering a trick to help remember this, to keep checking their output – nothing else.


