
I'm not sure what the OP's particular point was, but Yann seemed to argue over and over again that testing Galactica with adversarial inputs is why "we can't have nice things", which to me seems not just defensive but kind of comical.

Any AI model needs to be designed with adversarial usage in mind. And I don't even think random people trying to abuse the thing for five minutes to get it to output false or vile info counts as a sophisticated attack.

Clearly, before publishing that demo, Facebook had once again put zero thought into what bad actors could do with this technology, and the appropriate response to people testing that out is certainly not to blame them.



> Any AI model needs to be designed with adversarial usage in mind

Why? There's probably plenty of ML usage where the training data, the users, and the outputs are all internal to one company and hence well controlled. Why should such a model be designed with adversarial usage in mind, if adversarial usage can be prevented by, e.g., making it a fireable offense?



