At the end of the day, they're building silicon that can do this so they're ready for when the software side of the house actually figures this stuff out. Of course, it doesn't seem like the software side is close, and a very real risk for Apple is a world where local AI use cases never grow to justify this level of silicon investment. More specifically: personal context is a big thing that Apple is uniquely positioned to capitalize on, but will a mobile-sized LLM with mobile-sized memory ever be able to coherently handle the volume of contextual data needed to be truly great? I have 400 GB in iCloud. You don't have to get into the weeds of most of that being images and such to recognize that even modern data-center-scale LLMs can handle, like, less than a megabyte of context at a time.
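Rough back-of-envelope on that gap, assuming ~4 bytes of English text per token and a 128k-token window (illustrative figures, not tied to any specific model):

    # How much of a user's cloud data fits in one LLM context window?
    # Assumed figures (illustrative): 128k-token window, ~4 bytes/token.
    context_tokens = 128_000
    bytes_per_token = 4
    context_bytes = context_tokens * bytes_per_token  # ~512 KB

    icloud_bytes = 400 * 10**9  # 400 GB of personal data

    print(f"Context window: ~{context_bytes / 1024:.0f} KB")
    print(f"Fraction of 400 GB that fits: {context_bytes / icloud_bytes:.2e}")
    # -> about 1.3e-6, i.e. roughly one part in 800,000

Even with generous assumptions, a single context holds less than a millionth of that corpus, so "personal context" means aggressive retrieval and summarization, not just a bigger window.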

There will always be local-first use cases, but it's also possible that, you know, we're already near the global maximum of those use cases, and the local AI coprocessors we've had for years handle them fine. That would be a severe shock to my perceived value of Apple right now, because my view is that their hardware division is firing on all cylinders and totally killing it. But when you're putting supercomputers into the iPad... maybe that doesn't actually matter. Meanwhile, their software gets worse every year that goes by.


