Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

>It's irrelevant whether the system possesses intelligence or will. If the completions it's making affect external systems, they can cause harm. The level of incoherence in the completions we're currently seeing suggests that at least some external-system-mutating completions would indeed be harmful.

One frame I've found useful is to consider LLMs as simulators; they aren't intelligent, but they can simulate a given agent and generate completions for inputs in that "personality"'s context. So, simulate Shakespeare, or a helpful Chatbot personality. Or, with prompt-hijacking, a malicious hacker that's using its coding abilities to spread more copies of a malicious hacker chatbot.

This pretty much my exact perspective on things too.



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: