
I don't see how von Neumann's work here helps eliminate the problem; it's arguably not much different from "just use more LLMs". His key result was that a sufficient number of redundant computations can push the error probability below a chosen threshold, which still leaves the system unreliable. This problem is worse, because the fundamental issue is quantifying what "correct" even means.
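
To make the redundancy point concrete, here's a toy majority-vote sketch (not von Neumann's actual multiplexing construction, and the per-component error rate is an assumed parameter): with more redundant copies the error rate falls, but it never reaches zero.

    import random

    def noisy_compute(correct_answer, error_rate):
        # One unreliable component: returns the right answer with
        # probability 1 - error_rate, otherwise flips it.
        if random.random() < error_rate:
            return not correct_answer
        return correct_answer

    def majority_vote(n_copies, error_rate, correct_answer=True):
        # Redundancy in the von Neumann spirit: run n_copies of the
        # unreliable component and take the majority outcome.
        votes = [noisy_compute(correct_answer, error_rate) for _ in range(n_copies)]
        return votes.count(correct_answer) > n_copies / 2

    def empirical_error(n_copies, error_rate, trials=100_000):
        wrong = sum(1 for _ in range(trials)
                    if not majority_vote(n_copies, error_rate))
        return wrong / trials

    # Error shrinks with more copies but stays nonzero.
    for n in (1, 3, 9, 27):
        print(n, empirical_error(n, error_rate=0.1))

And that sketch assumes we can even check each output against a known correct answer, which is exactly what's missing in the LLM case.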

Your suggestion of evaluating accuracy at the layer level implies there is some method of quantifiably detecting hallucinations. That is not necessarily possible given particular attention architectures, or even mathematically possible under the constraint of "infer this from finite text with no ability for independent verification".
