> Also, people often mistake the reason for an NPU is "speed". That's not correct. The whole point of the NPU is rather to focus on low power consumption.
I have a sneaking suspicion that the real real reason for an NPU is marketing. "Oh look, NVDA is worth $3.3T - let's make sure we stick some AI stuff in our products too."
The correct way to make a true "NPU" is to 10x your memory bandwidth and feed a regular old multicore CPU with SIMD/vector instructions (and maybe a matrix multiply unit).
Most of these small NPUs are actually made for CNNs and other models where "stream data through weights" applies. They have a huge speedup there. When you stream weights across data (any LLM or other large model), you are almost certain to be bound by memory bandwidth.
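To put rough numbers on that (all figures below are illustrative assumptions, not the specs of any particular chip), here is the back-of-envelope version:

    # Back-of-envelope: why batch-1 LLM decoding ("stream weights across data")
    # is memory-bandwidth bound on a small NPU. All numbers are assumptions.
    params = 7e9                      # ~7B-parameter model
    weight_bytes = params * 2         # fp16/bf16, 2 bytes per weight
    npu_ops_per_s = 40e12             # hypothetical "40 TOPS" NPU
    dram_bw = 100e9                   # ~100 GB/s laptop-class memory bandwidth

    # Each generated token touches every weight about once (~2 ops per weight).
    t_compute = (2 * params) / npu_ops_per_s      # ~0.35 ms/token
    t_memory = weight_bytes / dram_bw             # ~140 ms/token

    print(f"compute-limited: {t_compute * 1e3:.2f} ms/token")
    print(f"memory-limited:  {t_memory * 1e3:.1f} ms/token")
    # Memory time dominates by a few hundred times, so extra TOPS are wasted
    # unless bandwidth scales with them. A CNN reuses each weight across many
    # pixels, which is why the same NPU looks great on those workloads.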
Most of the toolchain got hidden behind OpenVINO and there was no hardware released for years. Keem Bay was 'next year' for years. I have some DSP code using it that I can't use anymore. Has Intel actually released new SHAVE cores, with an actual dev environment? I'm curious.
The politics behind the software issues are complex. At least from the public presentation the new SHAVE cores are not much changed besides bigger vector units. I don't know what it would take to make a lower level SDK available again but it sure seems like it would be useful.
Microsoft needs to throw something in the gap to slow down MacBook attrition.
The M processors changed the game. My teams support 250k users. I went from 50 MacBooks in 2020 to over 10,000 today. I added zero staff - we manage them like iPhones.
Microsoft is slowly being squeezed from both sides of the market. Chromebooks have silently become wildly popular on the low end. The only advantages I see Windows holding are corporate and gaming, and Valve is slowly chipping away at the gaming advantage as well.
Chromebooks are nowhere to be seen outside the US school market.
Coffee shops, trains and airports in Europe? Nope, a rare animal on tables.
European schools? In most countries parents buy their kids a computer, and most often it is a desktop used by the whole family, or a laptop of some kind running Windows, unless we are talking about countries where buying Apple isn't a strain on the monthly budget.
Popular? In Germany, the few times they get displayed in shopping mall stores, they get routinely discounted, or bundled with something else, until the stores finally get rid of them.
Valve is heavily dependent on game studios producing Windows games.
The M processor really did completely eliminate all sense of “lag” for basic computing (web browsing, restarting your computer, etc). Everything happens nearly instantly, even on the first generation M1 processor. The experience of “waiting for something to load” went away.
Not to mention these machines easily last 5-10 years.
It's fine. For basic computing, my M3 doesn't feel much faster than my Linux desktop that's like 8 years old. I think the standard for laptops was just really, really low.
> I think the standard for laptops was just really, really low.
As someone who used Windows laptops, I was amazed when I saw the person sitting next to me on the subway editing images in Photoshop on her MacBook Pro with just her trackpad. The standard for Windows laptops used to be that low (about ten or twelve years ago?) that the sight of a MacBook trackpad in action became part of my permanent memory.
I don't understand the hype around Apple trackpads. 15 years ago, sure, there was a huge gulf of difference, but today? The only difference that I can see or feel, at least between a Lenovo or Dell and an Apple, is that the Mac trackpad is physically larger.
As a very happy M1 Max user (should've shelled out for 64GB of RAM, though, for local LLMs!), I don't look forward to seeing how the Google Workspace/Notions/etc. of the world somehow reintroduce lag back in.
The problem for Intel and AMD is they are stuck with an OS that ships with a lag-inducing anti-malware suite. I just did a simple git log and it took 2000% longer than usual because the antivirus was triggered to scan and simulate every machine instruction and byte of data accessed. The commit log window stayed blank, waiting to load, long enough for me to complete another tiny project. It always ruins my day.
Pro tip: turn off malware scanning in your git repos[0]. There is also the new Dev Drive feature in Windows 11 that makes it even easier for developers (and IT admins) to set this kind of thing up via policies[1].
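For anyone on stock Microsoft Defender, the per-repo exclusion is a one-liner in an elevated PowerShell prompt (the path below is just an example; point it at your own source tree, and clear it with your IT policy first):

    # Example only: excludes a source directory from Defender real-time scanning.
    # Requires an elevated prompt and Microsoft Defender as the active AV.
    Add-MpPreference -ExclusionPath "C:\src"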
In companies where I worked where the IT team rolled out "security" software to the Mac-based developers, their computers were not noticeably faster than Windows PCs at all, especially given the majority of containers are still linux/amd64, reflecting the actual deployment environment. Meanwhile Windows also runs on ARM anyway, so it's not really something useful to generalize about.
Unfortunately, the IT department people think they are literal GODs for knowing how to configure Domain Policies and lock down everything. They even refuse to help or even answer requests for help when there are false positives on our own software builds that we cannot unmark as false positives. These people are proactively antagonistic to productivity. Management could not care less…
Nobody wants to be responsible for allowing exceptions in security matters. It's far easier to ignore the problem at hand than to risk being wrong just once.
They don't think they're gods, they just think you're an idiot. This is not to say that you are, or even that they believe YOU individually are an idiot, it's just that users are idiots.
There are also insurance, compliance, and other constraints that IT folks have that make them unwilling to turn off scanning for you.
To be fair, the average employee doesn’t have much more than idiot-level knowledge when it comes to security.
The majority of employees would rather turn off automatic OS updates simply because it’s a hassle to restart your computer, because god forbid you lose those 250 Chrome tabs waiting for you to never get around to revisiting them!
The short answer is that you can't without the necessary permissions, and even if you do, the next rollout will wipe out your changes.
So the pro-part of the tip does not apply.
On my own machines, antivirus is one of the very first things to be removed. Most of the time I'd turn off the swap file entirely too, but Windows doesn't overcommit and certain applications are notorious for allocating memory without ever using it.
Chrome managed it. Not sure how since Edge still works reasonably well and Safari is instant to start (even faster than system settings, which is really an indictment of SwiftUI).
I have a first gen M1 and it holds up very nicely even today. I/O is crazy fast and high compute loads get done efficiently.
One can bury the machine and lose very little basic interactivity. That part users really like.
Frankly the only downside of the MacBook Air is the tiny storage. The 8GB RAM is actually enough most of the time, but general system storage of only 1/4 TB is consistently cramped.
Been thinking about sending the machine out to one of those upgrade shops...
Not OP, but by booting an M1 from an external Thunderbolt NVMe drive you lose less than 50% of benchmark disk throughput (3GB/s is still ridiculously fast), you can buy an 8TB drive for less than $1k, and you can boot it on another M1 Mac if something happens.
If there were a "max mem, min disk" model, I'd def get that.
Interesting. You know I bought one of those USB 3 port expanders from TEMU and it is excellent! (I know, TEMU right? But it was so cheap!)
I could 3D print a couple of brackets and probably lodge a bigger SSD (or the smaller form-factor eMMC, I think) in there and pack it all into a little package one just plugs in. The port extender is currently shaped such that it fits right under the Air, tilting it nicely for general use.
The Air only has external USB... still, I don't need to boot from it. The internal one can continue to do that. Storage is storage for most tasks.
In our company we see the opposite. 5 years ago all the devs wanted Mac instead of Linux. Now they want to go back.
I think part of the reason is that we manage Mac pretty strictly now but we're getting there with Linux too.
We also tried to get them to use WSL 1 and 2 but they just laugh at it :) And point at its terrible disk performance and other dealbreakers. Can't blame them.
I assume you're both right. I'm sure NPUs exist to fill a very real niche, but I'm also sure they're being shoehorned in everywhere regardless of product fit because "AI big right now."
Looking at it slightly differently: putting low-power NPUs into laptop and phone SoCs is how to get on the AI bandwagon in a way that NVIDIA cannot easily disrupt. There are plenty of systems where an NVIDIA discrete GPU cannot fit into the budget (of $ or Watts). So even if NPUs are still somewhat of a solution in search of a problem (aka a killer app or two), they're not necessarily a sign that these manufacturers are acting entirely without strategy.
The shoehorning only works if there is buyer demand.
As a company, if customers are willing to pay a premium for an NPU, or if they are unwilling to buy a product without one, it is not your place to say “hey, we don’t really believe in the AI hype, so we’re going to sell products people don’t want to prove a point”
If they shove it in every single product and that’s all anyone advertises, whether consumers know it will help them or not, you don’t get a lot of choice.
If you want the latest chip, you’re getting AI stuff. That’s all there is to it.
"The math is clear: 100% of our our car sales come from models with our company logo somewhere on the front, which shows incredible customer desire for logos. We should consider offering a new luxury trim level with more of them."
To some degree I understand it, because as we’ve all noticed, computers have pretty much plateaued for the average person. They last much longer. You don’t need to replace them every two years anymore because the software isn’t outstripping them so fast.
AI is the first thing to come along in quite a while that not only needs significant power but is also just something different. It’s something they can say your old computer doesn’t have that the new one does, other than being 5% faster or whatever.
So even if people don’t need it, and even if they notice they don’t need it, it’s something to market on.
The stuff up thread about it being the hotness that Wall Street loves is absolutely a thing too.
Apple will have a completely AI capable product line in 18 months, with the major platforms basically done.
Microsoft is built around the broken Intel tick/tock model of incremental improvement — they are stuck with OEM shitware that will take years to flush out of the channel. That means for AI, they are stuck with cloud-based OpenAI, where NVIDIA has them by the balls and the hyperscalers are all fighting for GPUs.
Apple will deliver local AI features as software (the hardware is “free”) at a much higher margin - while Office 365 AI is like $400+ a year per user.
You’ll have people getting iPhones to get AI assisted emails or whatever Apple does that is useful.
The stuff they've been using to sell AI to the public is increasingly looking as absurd as every 1978 "you'll store your recipes on the home computer" argument.
AI text became a Human Centipede story: Start with a coherent 10-word sentence, let AI balloon it into five pages of flowery nonsense, send it to someone else, who has their AI smash it back down to 10 meaningful words.
Coding assistance, even as spicy autocorrect, is often a net negative as you have to plow through hallucinations and weird guesses as to what you want but lack the tools to explain to it.
Image generation is already heading rapidly into cringe territory, in part due to some very public social media operations. I can imagine your kids' kids in 2040 finding out they generated AI images in the 2020s and looking at them with the same embarrassment you'd see if they dug out your high-school emo fursona.
There might well be some more "closed-loop" AI applications that make sense. But are they going to be running on every desktop in the world? Or are they going to be mostly used in datacentres and purpose-built embedded devices?
I also wonder how well some of the models and techniques scale down. I know Microsoft pushed a minimum spec to promote a machine as Copilot-ready, but that seems like it's going to be "Vista Basic Ready" redux as people try to run tools designed for datacentres full of Quadro cards, or at least high-end GPUs, on their $299 HP laptop.
Cringe emo girls are trendy now because the nostalgia cycle is hitting the early 2000s. Your kid would be impressed if you told them you were a goth gf. It's not hard to imagine the same will happen with primitive AIs in the 40s.
"Bela Lugosi's Dead" came out in 1979, and Peter Murphy was onto his next band by 1984.
By 2000, goth was fully a distant dot in the rear-view mirror for the OGs.
In 2002, Murphy released *Dust* with Turkish-Canadian composer and producer Mercan Dede, which utilizes traditional Turkish instrumentation and songwriting, abandoning Murphy's previous pop and rock incarnations, and juxtaposing elements from progressive rock, trance, classical music, and Middle Eastern music, coupled with Dede's trademark atmospheric electronics.
I'm not sure what "gothic music existed in the 1980s" is meant to indicate as a response to "goths existed in the early 2000s as a cultural archetype".
True Goth died out way before any of that. They totally sold out when they sacked Rome; the gold went to their heads and everything since then has been nostalgia.
I expect this sort of thing to go out of fashion and/or be regulated after "AI" causes some large life loss, e.g. starting a war or designing a collapsing building.
I hope that once they get a baseline level of AI functionality in, they start working with larger LLMs to enable some form of RAG... that might be their next generational shift.
> while Office 365 AI is like $400+ a year per user
And I'm pretty sure this is only introductory pricing. As people get used to it and use it more, it won't cover the cost. I think they rely on the gym membership model currently: many people not using the AI features much. But eventually that will change. Also, many companies have figured that out and pull the Copilot license from users who don't use it enough.
Until AI chips become abundant, and we are not there yet, cloud AI just makes too much sense. Using a chip constantly vs using it 0.1% of the time is just so many orders of magnitude better.
Local inference does have privacy benefits. I think at the moment it might make sense to send most queries to a beefy cloud model, and send sensitive queries to a smaller local one.
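A minimal sketch of that split, assuming placeholder local_model and cloud_model callables and a toy regex check standing in for a real sensitivity classifier:

    import re

    # Toy heuristic for "sensitive" prompts; a real deployment would use an
    # actual PII/DLP classifier. local_model and cloud_model are placeholders
    # for whatever local runtime and hosted API you actually use.
    SENSITIVE = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # SSN-like numbers
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),           # email addresses
        re.compile(r"\b(password|api[_ ]?key|secret)\b", re.I),
    ]

    def looks_sensitive(prompt: str) -> bool:
        return any(p.search(prompt) for p in SENSITIVE)

    def answer(prompt: str, local_model, cloud_model) -> str:
        if looks_sensitive(prompt):
            return local_model(prompt)   # stays on-device (NPU/GPU)
        return cloud_model(prompt)       # bigger hosted model for the rest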
Apple hasn’t shipped any AI features besides betas. I trust the people responsible for the useless abomination that is Siri to deliver a useful AI tool as much as I would trust Joe Biden to win a breakdancing competition.
Well, fortunately for all of us, the people delivering client-side ML today are totally different from the people who implemented a server-side rule-based assistant 10 years ago.
The fact that they couldn’t deliver something even approaching kindergarten levels of understanding a year ago makes me worry that either zero of the people who know what they’re doing in present-day “AI” work at Apple, or, plenty of great minds do but they can’t get anything done because Apple’s management is too conservative to release something that would be vastly more powerful than Siri but might possibly under certain circumstances hurt someone’s feelings or otherwise embarrass Apple.
Nothing would make me happier than to finally be wrong betting against “Siri,” though.
The real consumers of the NPUs are the operating systems themselves. Google’s TPU and Apple’s ANE are used to power OS features like Apple’s Face ID and Google’s image enhancements.
We’re seeing these things in traditional PCs now because Microsoft has demanded it so that Microsoft can use it in Windows 11.
Any use by third-party software is a lower priority.
That’s how we got an explosion of interesting hardware in the early 80s - hardware companies attempting to entice consumers by claiming “blazing 16 bit speeds” or other nonsense. It was a marketing circus but it drove real investments and innovation over time. I’d hope the same could happen here.
I can’t find TDP for Apple’s Neural Engine (https://en.wikipedia.org/wiki/Neural_Engine), but the first version shipped in the iPhone 8, which has a 7 Wh battery, so these are targeting different markets.
The derivative meaning has been used so widely that it has surpassed the original one in usage. But it doesn’t change the fact that it originally referred to the fingers.