
Plus, even if Apple is using their own chips for inference, they're still driving more demand for training, which Nvidia still has locked down pretty tight.


Apple said they’re using their own silicon for training.

Edit: unless I misunderstood and they meant only inference.


Without more details it's hard to say, but I seriously doubt they trained any significantly large LM on their own hardware.

people on HN routinely seem to overestimate Apple's capabilities

Edit: in fact, IIRC just last month Apple released a paper unveiling their OpenELM language models, and those were all trained on Nvidia hardware.


Interesting, I thought Apple Silicon mainly excelled at inference. Though I suppose the economics are unique for Apple itself, since it can fill racks with barebones Apple Silicon boards without paying its own retail markup for complete assembled systems, like everyone else has to.
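For anyone curious what inference on Apple Silicon looks like in practice, here's a minimal sketch using PyTorch's MPS (Metal Performance Shaders) backend. The model and shapes are placeholders I made up for illustration, not anything Apple actually ships; the snippet falls back to CPU so it runs on any machine:

```python
import torch

# Pick the MPS device on Apple Silicon, fall back to CPU elsewhere.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

# Stand-in for a real model: a single linear layer, 4 inputs -> 2 outputs.
model = torch.nn.Linear(4, 2).to(device)
model.eval()

# Run one forward pass without tracking gradients (inference mode).
x = torch.randn(1, 4, device=device)
with torch.no_grad():
    y = model(x)

print(y.shape)  # torch.Size([1, 2])
```

The same `.to(device)` pattern moves larger models onto the GPU; whether the economics beat dedicated accelerators at datacenter scale is exactly what the thread is debating.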


They trained GPT-4o on Apple Silicon? I find that hard to believe; surely they only mean that some of their models were trained on Apple Silicon.


Not GPT-4o, their own models that power some (most?) of the “Apple Intelligence” stuff.



