
They don't need to solve the problem of reasoning; they only need to simulate reasoning well enough.

They are getting pretty good: people already have to "try" a little to find examples where GPT-3 or DALL-E gets it wrong. Give it a few more billion parameters and more training data, and GPT-10 might still be as dumb as GPT-3, but it'll be impossible (or irrelevant) to prove.


