Hacker News
User23 on Jan 30, 2024 | on: Ask HN: Are you using a GPT to prompt-engineer ano...
I have a basically unsubstantiated intuition that there is some analog of the recursion theorem for LLMs, if the theorem isn't itself directly applicable. If so, it should be mathematically impossible to prevent prompt "hacking."
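
(For reference, and assuming this is the theorem meant, Kleene's recursion theorem states that for every total computable function $f$ there is an index $e$ with

\[
\varphi_e = \varphi_{f(e)},
\]

i.e. every computable transformation of programs has a fixed point, which is why self-referential programs such as quines always exist. The analogy, as I read it, is that an LLM's instructions and the untrusted user input share a single token stream, so inputs that refer to and override their surrounding instructions would be as unavoidable as self-referential programs are.)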