Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Well, for one, by eliminating external tool calling, the model gains an amount of security. This occurs because the tools being called by an LLM can be corrupted, and in this scenario corrupted tools would not be called.


Prompt injection is still a possibility, so while it improves the security posture, not by much.


Prompt injection will always be a possibility, it's a direct consequence of the fundamental nature being a fully general tool.




Consider applying for YC's Summer 2026 batch! Applications are open till May 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: