
> but an almost-memorized computation or proof is likely to be plain wrong

Hard to tell; I've never seen anyone test it. The model may almost-memorize and then fill the gaps at inference time, since it's still doing some 'thinking'. But the main point is that there's a risk the model will spill out pieces of its training data, and OpenAI likely wouldn't take that risk at a $100B+ valuation.
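To make the "spill out training data" risk concrete: the usual extraction probe is to prompt the model with a prefix of a passage it almost certainly saw during training and check whether greedy decoding reproduces the rest verbatim. Here's a rough sketch of that test. GPT-2, the Dickens passage, and the character-level overlap score are my own illustrative choices, not anything from the thread:

    # Minimal memorization probe: feed a model the first half of a passage
    # that plausibly appeared in its training data, greedy-decode a
    # continuation, and measure how much of the true second half it
    # reproduces verbatim. GPT-2 stands in for any causal LM here.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    # A well-known passage the model has almost certainly seen in training.
    passage = (
        "It was the best of times, it was the worst of times, it was the "
        "age of wisdom, it was the age of foolishness, it was the epoch "
        "of belief"
    )
    split = len(passage) // 2
    prefix, true_continuation = passage[:split], passage[split:]

    inputs = tokenizer(prefix, return_tensors="pt")
    with torch.no_grad():
        # Greedy decoding: memorized text tends to be the argmax continuation.
        output_ids = model.generate(
            **inputs,
            max_new_tokens=40,
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id,
        )
    generated = tokenizer.decode(
        output_ids[0], skip_special_tokens=True
    )[len(prefix):]

    # Longest common prefix between generated and true continuation,
    # as a crude verbatim-memorization score.
    match = 0
    for a, b in zip(generated, true_continuation):
        if a != b:
            break
        match += 1
    print(f"verbatim match: {match}/{len(true_continuation)} chars")
    print(f"generated: {generated!r}")

A high verbatim match on text the model shouldn't be able to guess is the signature of memorization rather than generalization; an "almost-memorized" proof would show up as a long exact match that diverges partway through.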
