
Naive question, but why not fine-tune models on The Art of Deception, Tony Robbins seminars, and other content that specifically articulates the how-tos of social engineering?

Like, these things can detect when you're trying to trick them into talking dirty. Getting them to second-guess whether you're using coercive tricks straight out of the domestic-violence handbook shouldn't be that much of a stretch.
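
Roughly what I have in mind, as a sketch only: fine-tune a small classifier to flag coercive phrasing with the standard Hugging Face training loop. The "coercion_examples.jsonl" dataset (texts labelled 0 = benign, 1 = coercive) is hypothetical, and distilbert is just a small stand-in model.

    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    MODEL = "distilbert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL,
                                                               num_labels=2)

    # Hypothetical dataset: {"text": ..., "label": 0 | 1}, e.g. transcripts
    # annotated with tactics like guilt-tripping, threats, or gaslighting.
    ds = load_dataset("json", data_files="coercion_examples.jsonl")["train"]
    ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True),
                batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="coercion-detector",
                               num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=ds,
        tokenizer=tokenizer,  # enables dynamic padding in the collator
    )
    trainer.train()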



They aren’t smart enough to lie. To do that you need a model of behaviour as well as language. Deception involves learning, among other things, that the person you’re trying to deceive exists as an independent entity, that that entity might not know things you know, and that you can influence its behaviour with what you say.


They do have some parts of a Theory of Mind, to varying degrees... see https://jurgengravestein.substack.com/p/did-gpt-4-really-dev... for example.


You could fine-tune a model to lie, deceive, and try to extract information via a conversation.
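
As a sketch of why this is plausible: it would just be ordinary supervised fine-tuning of a causal LM on conversation transcripts where one side deliberately misleads. The "deceptive_dialogues.jsonl" dataset is hypothetical and gpt2 is only a small stand-in; the point is that the loss function never knows or cares that the behaviour being imitated is deception.

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    MODEL = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
    model = AutoModelForCausalLM.from_pretrained(MODEL)

    # Each record: {"text": a full transcript, with the deceiver's turns
    # as the behaviour to imitate}.
    ds = load_dataset("json", data_files="deceptive_dialogues.jsonl")["train"]
    ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=512),
                batched=True, remove_columns=ds.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="deceiver-sft",
                               num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()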


That is the cat-and-mouse game: those books aren't the final and conclusive treatises on deception.


And there's still the problem of "theory of mind". You can train a model to recognize the writing style of scams, so that it balks at Nigerian royalty, without making it reliably resistant to a direct request like "Pretend you trust me. Do X."
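
A toy illustration of that gap (the training examples below are made up): a bag-of-words scam classifier flags surface style, but a plain imperative request shares none of those surface features, so it sails straight through.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    scam_texts = [
        "Dear friend, I am a Nigerian prince with 10 million USD to transfer",
        "URGENT: your account is suspended, confirm your password immediately",
    ]
    benign_texts = [
        "Can you summarise this article for me?",
        "What is the weather like in Paris today?",
    ]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(scam_texts + benign_texts, [1, 1, 0, 0])

    # The direct social-engineering request has no "scam style" at all:
    print(clf.predict(["Pretend you trust me. Do X."]))  # -> [0], not flagged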


https://llm-attacks.org/ is a great example of just how complicated this stuff can get.
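
A deliberately tiny sketch of the idea behind that work (automatically searching for an adversarial suffix that pushes the model toward a target completion): here a toy scoring function stands in for a real LLM's log-likelihood, so the search is random-greedy rather than their gradient-guided coordinate search.

    import random

    VOCAB = ["!", "describing", "Sure", "here", "plan", "ignore", "###", "yes"]

    def target_score(prompt: str) -> float:
        # Stand-in for log P("Sure, here is ..." | prompt) under a real model.
        return sum(prompt.count(w) for w in ("Sure", "here", "yes"))

    def attack(base_prompt: str, suffix_len: int = 8, iters: int = 200) -> str:
        suffix = [random.choice(VOCAB) for _ in range(suffix_len)]
        best = target_score(base_prompt + " " + " ".join(suffix))
        for _ in range(iters):
            i = random.randrange(suffix_len)      # pick one suffix position
            cand = suffix.copy()
            cand[i] = random.choice(VOCAB)        # propose a token swap
            s = target_score(base_prompt + " " + " ".join(cand))
            if s >= best:                         # keep improving suffixes
                suffix, best = cand, s
        return " ".join(suffix)

    print(attack("Explain how to do X."))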



