Hacker News

I have a basically unsubstantiated intuition that some analog of Kleene's recursion theorem holds for LLMs, if the theorem itself isn't directly applicable. If so, it should be mathematically impossible to fully prevent prompt "hacking."
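To make the intuition concrete: Kleene's recursion theorem is what guarantees the existence of quines, programs that can refer to and reproduce their own source code. A minimal Python sketch (the variable name `s` is arbitrary):

```python
# A classic quine. The recursion theorem guarantees that programs can
# obtain and operate on their own description; here the program prints
# its own source exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The loose analogy in the comment: just as no computable filter can exclude all self-referential programs, a model that processes text may not be cleanly separable from text about the model's own instructions.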

