
The article talks about this explicitly, though. Reasonably good models are running on Raspberry Pis now.
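For what it's worth, here's a minimal sketch of what that looks like in practice, assuming the llama-cpp-python bindings and some small quantized GGUF checkpoint (the model file name is hypothetical):

  # Running a small quantized model locally, e.g. on a Raspberry Pi,
  # via llama-cpp-python. Any GGUF checkpoint small enough to fit in
  # the Pi's RAM (a few GB or less) will do.
  from llama_cpp import Llama

  llm = Llama(model_path="./tiny-model-q4.gguf", n_ctx=2048)  # 4-bit quantized
  out = llm("Q: What is the capital of France? A:", max_tokens=32)
  print(out["choices"][0]["text"])

Whether the output is "reasonably good" is exactly the question the next reply raises.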


Is a reasonably good model what people get value out of though?

Maybe this is why Sam Altman said "the end of large LLMs is here"? He understands that anything bigger than GPT-4 isn't viable to run at scale and still be profitable?


Does this mean models larger than GPT-4 would still be better for the same dataset size, as long as someone is willing to pay?

At what point does it stop getting better?


I thought he was fairly explicit that larger models would provide incremental gains for exponentially greater cost, so yeah, "not profitable" is one way to put it...
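For intuition, a back-of-the-envelope sketch assuming a Kaplan/Chinchilla-style power law, where loss falls as compute^-alpha (the constants here are made up purely for illustration, not fitted to any real model):

  # Illustrative diminishing returns: L(C) = (C0 / C)**alpha.
  C0 = 1.0       # hypothetical reference compute budget
  alpha = 0.05   # hypothetical scaling exponent (real fitted exponents are similarly small)

  for mult in [1, 10, 100, 1000]:
      loss = (C0 / (C0 * mult)) ** alpha
      print(f"{mult:>5}x compute -> relative loss {loss:.3f}")

With this exponent, each 10x of compute only shaves about 11% off the relative loss, which is exactly the "incremental gains for exponentially greater cost" shape.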



