Hacker News

It could be. Or just smarter caching (which wouldn't necessarily have to do with model intelligence). Or just overfitting on the 95% most common prompts (which could save tokens but make the models less intelligent/flexible).
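A minimal sketch of the prompt-level caching meant here, assuming a simple exact-match scheme keyed on a normalized prompt (all names are hypothetical; a real system might instead match semantically similar prompts via embeddings):

```python
from functools import lru_cache

def normalize(prompt: str) -> str:
    """Cheap normalization so trivially different prompts share a cache slot."""
    return " ".join(prompt.lower().split())

calls = 0

def expensive_model_call(prompt: str) -> str:
    # Stand-in for the actual (token-consuming) model inference.
    global calls
    calls += 1
    return f"answer to: {prompt}"

@lru_cache(maxsize=4096)
def cached_answer(normalized_prompt: str) -> str:
    # Identical or near-identical prompts skip the model call entirely,
    # saving tokens without touching the model itself.
    return expensive_model_call(normalized_prompt)

cached_answer(normalize("What is HTTP?"))
cached_answer(normalize("  what IS http? "))  # cache hit: no second model call
```

The trade-off the comment points at: the cache (or a model fine-tuned toward the most common prompts) makes the frequent cases cheap, while rare or novel prompts see no benefit.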

