
This sounds right to me and was similar to my reaction. The doubt I had reading this piece is that GPT-4 is so substantially better than GPT-3 on most general tasks that I feel silly using GPT-3 even if it could potentially be sufficient.

Won't any company that can stay a couple of years ahead of open source on something this important be dominant for as long as it can keep that lead?

Can an open source community fine-tuning on top of a smaller model consistently surpass a much larger model on the long tail of questions?

Privacy is one persistent advantage of open source, especially if we think companies are too scared of model weights leaking to let people run models locally. But copyright licenses give companies a way to protect their models for many use cases, so a company like Google could let people run models locally for privacy and still have a moat, if that's what users want. And anyway, won't most users prefer running things in the cloud for better speed and to avoid storing gigabytes of data on their devices?
