
If future AIs in warfare are designed, like AlphaGo, to maximize win probability rather than win margin, I think their behavior won't be what people expect. That alone speaks to the bias people tend to have toward wanting a greater advantage when they think they are behind. I haven't looked thoroughly, but I would not be surprised if that bias is a major factor in the escalation of violence and the perpetuation of war. An AI that goes for the most efficient win condition, on the other hand, might not do that.
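The distinction between the two objectives can be sketched in a few lines. This is a toy illustration with made-up move names and numbers, not anything from a real engine: each candidate move gets an estimated win probability and an expected margin, and the two policies simply maximize different columns.

```python
# Hypothetical move evaluations: (estimated win probability, expected margin).
# The move names and numbers here are illustrative assumptions.
moves = {
    "safe": (0.92, 1.5),     # very likely to win, but only just
    "greedy": (0.70, 15.0),  # big margin if it works, but riskier
}

def pick_by_win_probability(evals):
    """AlphaGo-style objective: maximize P(win), ignore the margin."""
    return max(evals, key=lambda m: evals[m][0])

def pick_by_margin(evals):
    """Margin-seeking objective: maximize expected advantage."""
    return max(evals, key=lambda m: evals[m][1])

print(pick_by_win_probability(moves))  # safe
print(pick_by_margin(moves))           # greedy
```

The win-probability policy happily trades away margin for certainty, which is why AlphaGo's play often looked "slack" to humans while remaining ahead.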

For students of the art of war, war rests upon a framework of asymmetry and unfair advantages. Even if nations agree to some sort of rules of war or rules of engagement, each side is always seeking unfair advantages -- cheats, if you will. This most often involves deception and information asymmetry. Or to put it another way, letting the other side see what they want to see, in order to create unfair advantages.

So I think what would be scary isn't an AI implemented along the lines of AlphaGo, but an AI trained to deceive and cheat in order to win. And the funny thing is that such an AI would be created from our own darkest shadows and our creative ability to wreak havoc -- and instead of examining our own human nature, we'll blame the AIs.


