
Where did you see the matmul acceleration support? I couldn't find this detail online.


Apple calls it "Neural Accelerators". It's all over their A19 marketing.


What a ridiculous way to market "linear algebra transistor array".


Hey man, it helps you think different. You just never knew your neurons needed accelerating.


I accelerate them every morning with an Americano.


I have to ask out of curiosity, why is your first comment made with one account, and the reply with a similarly-named alt?


To confuse all those neural accelerators scraping this conversation.


That seems incredibly prescient for accounts created before even GPT-1. Obviously broad data scraping existed before then, but even amongst this crowd I find it hard to believe that’s the real motivator.


Account on laptop, account on mobile.


I really hope someone got fired for this blunder


Which means what, exactly, to someone who's not a machine learning researcher?


Don’t all of the M series chips contain neural cores?


Yes, they do. They're called the Neural Engine, aka an NPU. They aren't used for local LLMs on Macs because they're optimized for power efficiency when running much smaller AI models.

Meanwhile, the GPU is powerful enough for LLMs but has been lacking matrix multiplication acceleration. This changes that.
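
For anyone who wants to poke at the GPU matmul path that already exists today, MetalPerformanceShaders exposes one. A minimal sketch, assuming macOS with a Metal device (the 4x4 sizes and the all-ones/all-twos inputs are mine for illustration, not from any Apple sample):

    import Metal
    import MetalPerformanceShaders

    // Multiply two 4x4 row-major float32 matrices on the GPU via MPS.
    let device = MTLCreateSystemDefaultDevice()!
    let queue = device.makeCommandQueue()!

    let n = 4
    let rowBytes = n * MemoryLayout<Float>.stride
    let desc = MPSMatrixDescriptor(rows: n, columns: n,
                                   rowBytes: rowBytes, dataType: .float32)

    func makeMatrix(_ values: [Float]) -> MPSMatrix {
        let buffer = device.makeBuffer(bytes: values, length: n * rowBytes)!
        return MPSMatrix(buffer: buffer, descriptor: desc)
    }

    let a = makeMatrix([Float](repeating: 1, count: n * n))
    let b = makeMatrix([Float](repeating: 2, count: n * n))
    let c = makeMatrix([Float](repeating: 0, count: n * n))

    // C = 1.0 * A * B + 0.0 * C
    let matmul = MPSMatrixMultiplication(device: device,
                                         transposeLeft: false, transposeRight: false,
                                         resultRows: n, resultColumns: n,
                                         interiorColumns: n, alpha: 1.0, beta: 0.0)

    let cmd = queue.makeCommandBuffer()!
    matmul.encode(commandBuffer: cmd, leftMatrix: a, rightMatrix: b, resultMatrix: c)
    cmd.commit()
    cmd.waitUntilCompleted()

    // Read back the result; every entry should be 8.0 for these inputs.
    let out = c.data.contents().bindMemory(to: Float.self, capacity: n * n)
    print((0..<n*n).map { out[$0] })

The point of the announcement is that this kind of call would now hit dedicated matmul units inside the GPU cores instead of being synthesized from the general-purpose shader ALUs.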


The Neural Engine is used for the built-in LLM that does text summaries etc., just not for third-party LLMs.

And there's an official port of Stable Diffusion to it: https://github.com/apple/ml-stable-diffusion
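
Third-party Core ML models can at least request the Neural Engine, too; Core ML treats the setting as a hint, not a guarantee, and falls back to the CPU for unsupported layers. A minimal sketch, where "Summarizer.mlmodelc" is a hypothetical compiled model, not anything Apple ships:

    import CoreML

    // Ask Core ML to schedule this model on the CPU and Neural Engine,
    // keeping it off the GPU. This is a request, not a guarantee.
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine

    // "Summarizer.mlmodelc" is a placeholder for any compiled Core ML model.
    let url = URL(fileURLWithPath: "Summarizer.mlmodelc")
    let model = try MLModel(contentsOf: url, configuration: config)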


I thought one of the reasons we do ML on GPUs is fast matrix multiplication?

So is the new engine an accelerator on top of the matmul accelerator?


From a compute perspective, GPUs are mostly about fast vector arithmetic, with which you can implement decently fast matrix multiplication. But starting with NVIDIA's Volta architecture at the end of 2017, GPUs have been gaining dedicated hardware units for matrix multiplication. The main purpose of augmenting GPU architectures with matrix multiplication hardware is for machine learning. They aren't directly useful for 3D graphics rendering, but their inclusion in consumer GPUs has been justified by adding ML-based post-processing and upscaling like NVIDIA's various iterations of DLSS.
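
To make that distinction concrete, here's roughly what "matmul synthesized from vector arithmetic" means: the whole computation reduces to vectorized multiply-accumulates, which is the primitive that pre-Volta GPU cores (and CPU SIMD units) execute. A toy CPU-side illustration in Swift, not how any real kernel is written:

    import simd

    // Naive 4x4 matmul built purely from vector multiply-accumulate,
    // the same primitive a GPU without matmul units uses to synthesize it.
    func matmul4x4(_ a: [simd_float4], _ b: [simd_float4]) -> [simd_float4] {
        // a and b each hold 4 rows (row-major).
        var c = [simd_float4](repeating: .zero, count: 4)
        for i in 0..<4 {
            // Result row i is a linear combination of b's rows,
            // weighted by the scalars of a's row i: pure FMA work.
            for k in 0..<4 {
                c[i] += a[i][k] * b[k]
            }
        }
        return c
    }

    let identity: [simd_float4] = [
        simd_float4(1, 0, 0, 0),
        simd_float4(0, 1, 0, 0),
        simd_float4(0, 0, 1, 0),
        simd_float4(0, 0, 0, 1),
    ]
    let m: [simd_float4] = [
        simd_float4(1, 2, 3, 4),
        simd_float4(5, 6, 7, 8),
        simd_float4(9, 10, 11, 12),
        simd_float4(13, 14, 15, 16),
    ]
    print(matmul4x4(identity, m)) // reproduces m's rows

Dedicated matmul hardware like tensor cores replaces that inner loop with a single fixed-function tile operation, which is where the big throughput win comes from.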


These are different: they're built into the GPU cores.


Does this mean that the equivalent logic for what has been called the Neural Engine is now integrated into each CPU core?


Each GPU core, but yes, this was part of what they announced today - it’s now integral rather than separate.



