
Those humans don't typically believe the hallucinated faces belong to real people, though, nor do they call the cops.


You don't think a person has ever called the police because they heard a noise they thought was an intruder, or saw someone or something suspicious that existed only in their mind? People make these kinds of mistakes too.


Of course, but the consistency of the false positive is the issue. An able-minded person can readily reconcile their confusion.


An ML system generally can reconcile (and also avoid) this kind of confusion, with present technology. The example is more a question of responsible implementation than of a gap in the state of the art.
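For illustration only (this is not something the commenter spelled out): a minimal sketch of the kind of "responsible implementation" being alluded to, assuming a frame-by-frame detector that emits a confidence score per detection. The threshold, frame count, and the should_alert helper are all made-up for the example.

    # Hypothetical sketch: avoid acting on a single spurious detection by
    # requiring both high confidence and persistence across several frames.
    from collections import deque

    CONF_THRESHOLD = 0.9      # assumed value; tune per deployment
    PERSISTENCE_FRAMES = 5    # detection must recur this many consecutive frames

    recent_hits = deque(maxlen=PERSISTENCE_FRAMES)

    def should_alert(detection_confidence: float) -> bool:
        """Return True only if a high-confidence detection persists over time."""
        recent_hits.append(detection_confidence >= CONF_THRESHOLD)
        return len(recent_hits) == PERSISTENCE_FRAMES and all(recent_hits)

The point is only that a one-off hallucinated detection need not trigger an alert if the system is built to demand corroboration, which is an implementation choice rather than a modelling breakthrough.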


Then that’s a question of training data.


The problem with this line of reasoning is that it can be used as a non-constructive counter to any observation about AI failure. It’s always more and more training data or errors in the training set.

This really is a god-of-the-gaps answer to the concerns being raised.


No, my point is that if two systems show very similar classes of errors but at different thresholds, and one was trained on significantly more data than the other, then the more likely conclusion is that there isn't enough data for the other.


They aren't very similar errors. ML solutions are about as accurate as humans at a glance, but given more time to look, humans clearly win. I'd say the system is similar to humans in some ways, but humans have a system above that which checks whether the results make sense. That higher-level system is completely lacking from modern ML theory, and it doesn't seem to work like our neural-net models at all (the brain isn't a neural net).


Don't most high-end machine learning solutions have more training data than a human could consume in a lifetime?


I don't think there is a realistic way to make that comparison.

For consideration, our brains start with architecture and connections that have evolved over a billion years (give or take) of training. Then we are exposed to a lifetime of embodied experience coming in through 5 (give or take) senses.

ML is picking out different things, but it's not obvious to me that models are actually getting more data than we have been trained on. Certainly GPT has seen more text, but I don't think comparing that to a person's training is any more meaningful than saying we'll each encounter tens of thousands of hours of HD video during our training.



