Isn't that just human bias seeping through into the data set? Of course a neural net trained on that data will show similar biases. The problem here is the human element.


As they say, it's not a technical problem, it's a people problem. But it's not "just" human: the field in general is elevating ML, AI, whatever you want to call it, with hype like "algorithms aren't biased like a human would be", which is technically true, but also trivial. The people creating these systems didn't even consider that they would reflect, and even enshrine with all kinds of high-priest-of-technology blessings, their biases; that's why we got Tay and why things like PredPol are terrible. The key is to acknowledge and actively protect against systematic bias, not make a business of it (coughtwitterfacebookcough).



