Hacker News

That's always been the case and was obvious to many from the start.

It really won't be that long until we see a ~GPT-4-class LLM embedded locally in a chip on the next iPhone release...



Are you aware what hardware is currently needed to run GPT-4?

Something bigger than a smartphone, usually.

So small, mobile-optimized LLMs will come, or rather are already here - but if they managed to make the big GPT-4 model run on an iPhone, that would be a pretty big thing in itself, far bigger than GPT-5.
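For scale, a rough back-of-envelope sketch of the weight-memory gap. GPT-4's real parameter count is not public - the ~1.8T figure below is an unconfirmed rumor used purely for illustration, alongside a ~7B "phone-sized" model - and this counts only the weights, not KV cache or activations:

```python
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GB.

    Ignores KV cache, activations, and runtime overhead, so real
    requirements are higher.
    """
    return n_params * bytes_per_param / 1e9

# Parameter counts are assumptions: 7B is a typical small open model;
# 1.8T is only a rumored GPT-4 figure, not a confirmed number.
for name, params in [("~7B local model", 7e9), ("rumored GPT-4 scale", 1.8e12)]:
    # 2.0 bytes/param = fp16; 0.5 bytes/param = aggressive 4-bit quantization
    for label, bpp in [("fp16", 2.0), ("4-bit", 0.5)]:
        print(f"{name} @ {label}: {weight_memory_gb(params, bpp):.1f} GB")
```

Even at 4-bit, a rumored GPT-4-scale model would need hundreds of GB for weights alone, while a 4-bit 7B model fits in a few GB - which is why the former stays in the data center and the latter can plausibly run on a phone.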


But LLMs are used relatively rarely, while perf/latency matters a lot for UX, and the performance needed varies (simple question, complex question, visual work).

Those demands are better met in the cloud.



