When I first loaded this article your comment was at the top and had no replies. It's still at the top, and the replies are all talking about the M1/M2. You applied the Streisand Effect to yourself.
Probably because the M1/M2 rule the roost as far as consumer hardware goes. No consumer ARM chip comes close, and Intel needs 3-4x the power draw just to match Apple on performance.
The M1 is the yardstick by which all new ARM chips will be measured for a while.
> Imagine that power efficiency on servers. Saving so much energy.
There's no magic in those chips. Power efficiency is a design parameter you may or may not optimize for, and ARM server chips dissipate more power because they have a lot more cores, more memory controllers, more cache, more IO, and, in general, a lot more hardware than it would make sense for a personal computer CPU to have. Computation for computation, they are not far from each other.
Intel's x86 ISA mandates more work per unit of useful computation served: lots of memory-write ordering bookkeeping, lots of instruction-boundary finding (instructions can be anywhere from 1 to 15 bytes, IIRC), and other machinery that buys the user nothing beyond higher IPC when the instruction stream is just right, while dissipating a lot of power along the way.
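Quick illustration of the 1-to-15-byte point, assuming you have the capstone disassembler installed (pip install capstone); the byte strings are just hand-assembled examples:

    # Print instruction sizes: x86-64 varies, AArch64 is a fixed 4 bytes.
    from capstone import (Cs, CS_ARCH_X86, CS_MODE_64,
                          CS_ARCH_ARM64, CS_MODE_ARM)

    x86 = Cs(CS_ARCH_X86, CS_MODE_64)
    for i in x86.disasm(bytes.fromhex("b801000000c390"), 0):    # mov eax,1; ret; nop
        print(f"x86-64  {i.size} byte(s): {i.mnemonic} {i.op_str}")

    a64 = Cs(CS_ARCH_ARM64, CS_MODE_ARM)
    for i in a64.disasm(bytes.fromhex("20008052c0035fd6"), 0):  # mov w0,#1; ret
        print(f"AArch64 {i.size} byte(s): {i.mnemonic} {i.op_str}")

The x86-64 loop prints sizes of 5, 1, and 1 bytes; every AArch64 instruction comes out at 4.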
Dropping the M2 into the same entry level Macbook Pro they sold last year gains you a couple of hours of battery life as well as a performance increase.
That battery life test understates the advantage Apple has over x86, probably because it's idling too much. The gap in a more CPU-intensive web browsing test is more like 2x versus competing x86 ultralights [1].
But is Apple ever going to allow their CPUs to be used for anything other than their own products? If it isn't even an option then it is mostly irrelevant. I can't get a Windows laptop with an Apple M2, so it isn't worth checking how efficient a CPU it is compared to Intel and AMD.
Actually it’s not irrelevant. Lots of people choose between a Mac and Windows and the M chips up the competitive pressure on Intel and AMD to improve their offerings.
My workplace gave me a choice between an M1 Macbook or an Intel-based Lenovo. Even though I'd never used a Mac full-time before, it was a no-brainer and I picked the Macbook. It absolutely does matter.
This is the argument Apple fans used to make during the PowerPC G4 days – “it doesn’t matter that Intel makes faster CPUs, because I can’t buy a Mac with them, so the comparison is irrelevant and not worth looking at”. Except, over time, it did matter as sales bled off to Windows machines.
Yeah, the OS matters, but it isn’t the only thing that matters to most people (and with the web, the loss-of-applications cost of switching your OS is a lot lower than it used to be).
It is exceptional work, but the system-on-a-chip design is a form of compromise, especially with regard to the availability and pricing of RAM configurations. The current MacBooks ship with the same amount of RAM as my Android phone.
My gut tells me that if we could, there would be phones catching on fire all over the place.
Although a phone has tons of power, and can probably function as a low-end desktop, the lack of a sizeable cooling solution, or even just a plain heatsink, would cause it to throttle pretty quickly.
I have to believe this is the reason, or otherwise someone would have done it already (successfully that is) for such an obvious use case.
> My gut tells me that if we could, there would be phones catching on fire all over the place.
If that were an issue, you would already be seeing it everywhere.
> Although a phone has tons of power, and can probably function as a low-end desktop, the lack of a sizeable cooling solution, or even just a plain heatsink, would cause it to throttle pretty quickly.
I would rather compare it to laptops; the processing power of a Snapdragon 8 sits in the mid-to-upper segment of the average laptops available on the market.
ARM is also a different platform, with a much better performance-per-watt ratio than a regular PC. There is throttling, but when it kicks in it doesn't reduce performance as dramatically.
> I have to believe this is the reason, or otherwise someone would have done it already (successfully that is) for such an obvious use case.
Believe it or not, the reasons it hasn't been done so far go far beyond the technical; we could have had computer-capable phones a long time ago.
Try using ffmpeg on Termux and on a regular laptop. Beyond some slight timing differences, my tests on a random file gave similar results, which is amazing.
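If you want to reproduce that comparison, here's a minimal timing sketch in Python; the filenames are placeholders, and it assumes ffmpeg is on the PATH in both environments:

    # Time the same ffmpeg transcode on Termux and on a laptop, then compare.
    import subprocess, time

    start = time.monotonic()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mp4", "-c:v", "libx264", "out.mp4"],
        check=True,  # raise if ffmpeg fails
    )
    print(f"transcode took {time.monotonic() - start:.1f}s")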
Speaking of battery-tech and mobile.
I wish I could get "today's phones" but with a "thick option", i.e. 2-3mm added thickness for greater battery capacity.
IMHO that's because Apple was the first to bother optimizing an ARM CPU for performance instead of battery life.
ARM's performance per watt has exceeded x86's for a long time, but most manufacturers used that advantage to build mobile CPUs instead of cranking up the power to see if it could catch up to high-performance x86.
Apple has a big advantage over Microsoft in terms of building PCs.
Apple can decide that it is switching from Intel to ARM and not looking back, so it not only bears the cost of investing in ARM but ultimately realizes savings from no longer having to support Intel.
There is no real unity behind Microsoft, Dell, Lenovo and all the other vendors, plus the motto of Windows users is "who moved my cheese?" You don't use Windows because Windows is the best operating system, you use Windows because it has the most software, and that software is written for and optimized for the x86 platform. Microsoft is always floating ARM-based systems as a hobby but it cannot commit to going "all in", so it has to support Intel as the primary processor for the foreseeable future; the ARM transition costs Microsoft but never saves it anything.
On top of that Microsoft is dependent on vendors like NVIDIA and AMD to supply drivers for graphics cards and all the other parts for their machines and getting them all to supply good drivers for ARM is yet another big ask. Apple can say "here is the supported hardware" and not have to fight with vendors who have the power to torpedo their ambitions.
At minimum, Microsoft could try for real. In 2019 Samsung announced the Galaxy Book S with an ARM processor. It looked like a nice compact laptop. I would have been interested in it, but it was basically never available and limited to Windows for ARM, which isn't even officially sold by Microsoft: you can run it fine on the new Macs in a VM, but only as part of the beta-test access.
Even if Microsoft cannot force the whole market to ARM - and might not even desire to do so - at least they could treat Windows 11 for ARM as a full product.
If Microsoft wanted to really try an ARM push they could do worse than releasing a Surface laptop with amazing out-of-the-box Linux support; it might not sell a ton to Windows users, but they could get something moving.
Thank you for a rather enlightening perspective. That explains a lot of big tech behaviors around the x86/ARM frontier that I had failed to grasp.
I find it somewhat ironic that, in the desire of a few CEOs to "Embrace… Extinguish" Linux in favor of Windows, Microsoft eventually created a world where Windows is a drain on their resources that they can't axe: though no longer a money-maker in its own right, Windows remains the basis for pretty much all their money-makers. In becoming a "software-first" company, they pretty much followed, if not provoked, the very paradigm that makes Windows a non-monetizable entity next to as-good-as-embedded macOS and ever-free Linux.
Linux, incidentally, is also THE basis for money-makers worldwide, and the OSS model means nobody using it today had to fork over even 1% of the real cumulative cost of that kernel and ecosystem since the 1990s. How's that for a win: Linux 1:0 Microsoft, a win the winner did not even seek, busy as it was doing its own thing, while the other pretty much dug its own commercial grave out of "relevance debt" (not sure how else to word what Windows has become)…
TL;DR: it's like a big item that takes up all your inventory, yet you can never ditch it because it's required for all your quests. In games, you wish the devs had forgotten about that crap and just made the inventory that much smaller. In real life, Microsoft can't kill Windows without closing business overnight. sad_pikachu_face.jpg
[Note that IMHO, MS is incredibly strong nowadays, so they'll find a way, and this is a fantastic learning experience for the industry.]
They are the top performing ARM computers you can go to a store and buy. I'd love to have an HPE Ampere or a POWER tower under my desk, but it's not easy to get anything like those, and certainly not for a price competitive with generic x86.
They're both RISC, but P9/P10/etc. are based on IBM's own Power ISA (now an open standard under the OpenPOWER Foundation); I wasn't aware it had anything to do with Arm? I could be hopelessly wrong...
There are other interesting ISAs that are not arm64. Sadly, the only ones that could still be viable for a desktop (more like a deskside) are either server-grade ARM or IBM's POWER (and neither of them makes Mac Minis).
I would be shocked if an A64FX core could go toe to toe with a Firestorm on general code. It just doesn't make sense for Fujitsu's use case to spend nearly as many gates on general improvements versus more vector ALUs.
Binary compatibility is no longer a huge concern. By now only very old programs require arm32 support, and most current offerings on any app store will have been recompiled for the latest and greatest (because software writers also want to look good on newer CPUs). Plus, if the new hardware is that much faster, you may not even need to ship native blobs.
A new chip not supporting arm32 does not prevent software from being built for the platform. As long as there are enough users, the distros will be maintained.
AArch64 is, effectively, a totally new instruction set. It was designed to allow fully regular decoding, unlike AArch32 and Thumb, which is a big part of why getting those off the chip is a big win. It's not just the decoders you clean up: you can make your pipeline and speculation deeper, because you can peek N instructions into the future without first having to decode the N-1 instructions in front to figure out where the Nth one starts. That's part of why AArch64-only designs like this one and the A12+ cores have much larger reorder buffers.
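A toy sketch of that front-end argument (illustrative only; no real decoder is written like this):

    # AArch64: instruction n sits at base + 4*n -- constant time, so a wide
    # front end can locate many instructions in parallel.
    def aarch64_insn_addr(base: int, n: int) -> int:
        return base + 4 * n

    # x86-64: lengths vary from 1 to 15 bytes, so locating instruction n
    # means decoding the n-1 instructions before it -- inherently serial.
    def x86_insn_addr(base: int, n: int, length_of) -> int:
        addr = base
        for _ in range(n):
            addr += length_of(addr)  # each step depends on the previous decode
        return addr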
Can we have a discussion about this CPU instead of talking about others? M1/M2 are not even the current top performing CPUs...