Hacker News
new
|
past
|
comments
|
ask
|
show
|
jobs
|
submit
login
bsenftner
39 days ago
|
parent
|
context
|
favorite
| on:
Executing programs inside transformers with expone...
Well, for one, by eliminating external tool calling, the model gains an amount of security. This occurs because the tools being called by an LLM can be corrupted, and in this scenario corrupted tools would not be called.
Oranguru
39 days ago
[–]
Prompt injection is still a possibility, so while it improves the security posture, not by much.
TeMPOraL
35 days ago
|
parent
[–]
Prompt injection will always be a possibility, it's a direct consequence of the fundamental nature being a fully general tool.
Consider applying for YC's Summer 2026 batch! Applications are open till May 4
Guidelines
|
FAQ
|
Lists
|
API
|
Security
|
Legal
|
Apply to YC
|
Contact
Search: