
This. So much this. One big problem with AI/ML as it's currently eating the world is that it's just a really fancy averaging engine. It memorizes (incredibly, beautifully, superhumanly) but it doesn't actually understand. There's a whole spate of "let's trick the AI" work in image processing (adversarial examples), and I have to believe there are easy ways to do the same with text.

That said, one counterargument: obfuscating a contract to fool an AI also makes it more confusing to humans, which could actually end up making it harder to enforce. So perhaps in this case there's a natural counter-force.



I read a great, great paper about training systems against this. Obfuscation is currently easy against AI, but the interesting case is obfuscation that fools an AI while a human could still detect it (purposeful perturbations). Models can be hardened against these by generating such perturbations programmatically during training and including them in the training set.

Link: https://arxiv.org/pdf/1705.06640.pdf
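The training idea above can be sketched in miniature. This is a minimal, hypothetical illustration (not the paper's actual method): a logistic-regression classifier on toy data, where each step generates FGSM-style worst-case input perturbations and trains on them alongside the clean examples. All names, data, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: two Gaussian blobs in 2-D (purely illustrative).
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
b = 0.0
eps, lr = 0.2, 0.1  # perturbation budget and learning rate (assumed values)

for _ in range(200):
    # For logistic loss, the gradient w.r.t. the input x is (p - y) * w,
    # so the FGSM perturbation is eps * sign of that gradient.
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign(np.outer(p - y, w))

    # Train on clean and adversarial examples together.
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * (X_all.T @ (p_all - y_all)) / len(y_all)
    b -= lr * np.mean(p_all - y_all)

clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy: {clean_acc:.2f}")
```

The point is only the shape of the loop: perturb, mix, descend. Real adversarial training (e.g. against the image perturbations discussed in the thread) does the same thing with deep networks and stronger attacks.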


>There's a whole spate of "let's trick the AI" in image processing

Check out this "let's trick the human" in image processing:

https://en.wikipedia.org/wiki/Optical_illusion


"Let's trick the human" is already a large part of law.



