Hacker News

> Any radiology AI that needs millions of training sets is useless in practice.

Why? I have no doubt that radiology AI might not be that useful (though radiologist friends of mine say AI is making an increasing impact on their field). But this logic doesn't follow. So what if an AI needs a million training examples, or even a million training sets? The training cost is paid once: once your topology and weights are set, that net can be copied and used by others, and you get a ready-to-go AI. There's an argument to be made that if training scale is what's needed to get to AGI, then maybe AGI is unrealizable, but that's not the same as saying a domain-specific AI is useless because it needs a large training set.
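The "train once, copy forever" point can be made concrete with a toy sketch (all names here are hypothetical, and the "trained" parameters are hard-coded to stand in for an expensive training run): the artifact worth distributing is just the topology plus the learned parameters, and loading them reproduces the model's behavior exactly without redoing any training.

```python
import json


class TinyLinearModel:
    """Minimal stand-in for a trained net: fixed topology, learned weights."""

    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def predict(self, x):
        # Weighted sum plus bias, thresholded to a binary label.
        score = sum(w * xi for w, xi in zip(self.weights, x)) + self.bias
        return 1 if score > 0 else 0

    def export(self):
        # Serialize the learned parameters; this is the artifact that gets copied.
        return json.dumps({"weights": self.weights, "bias": self.bias})

    @classmethod
    def load(cls, blob):
        params = json.loads(blob)
        return cls(params["weights"], params["bias"])


# "Training" on the million examples happened once, somewhere expensive;
# here the resulting parameters are simply hard-coded.
trained = TinyLinearModel(weights=[0.8, -0.3], bias=0.1)

# Anyone can load the exported parameters and get identical behavior,
# paying none of the original training cost.
copy = TinyLinearModel.load(trained.export())
sample = [1.0, 2.0]
assert copy.predict(sample) == trained.predict(sample)
print("copied model agrees:", copy.predict(sample))
```

The same economics hold for a real radiology model: the marginal cost of an additional deployment is a file copy, which is why a large one-time training set doesn't by itself make the system useless.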


