
No, my point is that if two systems show very similar classes of errors, but at different thresholds, with one trained on significantly more data, then the more likely conclusion is that there isn't enough data in the other.


They aren't very similar errors. ML solutions are about as accurate as humans at a glance, but over longer horizons humans clearly win. I'd say the systems are similar to humans in some ways, but humans have a system above that which is used to check whether the results make sense or not. That higher-level system is completely lacking from modern ML theory, and it doesn't seem to work like our neural net models at all (the brain isn't a neural net).


Don't most high-end machine learning solutions have more training data than a human could consume in a lifetime?


I don't think there is a realistic way to make that comparison.

For consideration, our brains start with architecture and connections that have evolved over a billion years (give or take) of training. Then we are exposed to a lifetime of embodied experience coming in through 5 (give or take) senses.

ML is picking out different things, but it's not obvious to me that models are actually getting more data than we have been trained on. Certainly GPT has seen more text, but I don't think comparing that to a person's training is any more meaningful than saying we each encounter tens of thousands of hours of HD video during our training.



