
Every time CPU stuff is discussed, it is quickly hijacked into praising Apple.

Can we have a discussion about this CPU instead of talking about others? M1/M2 are not even the current top performing CPUs...



When I first loaded this article your comment was on top and had no comments. It's still on top and the comments are talking about M1/M2. You applied the Streisand Effect to yourself.


Probably because the M1/M2 rule the roost as far as consumer hardware goes. No consumer ARM chip comes close, and Intel needs 3-4x more power draw just to match Apple on performance.

The M1 is the yardstick by which all new ARM chips are going to be measured for a while.


> Intel needs 3-4x more power draw just to match Apple on performance.

The M1 Max is arguably price competitive with Intel 12900K, at half the performance and 1/6 wattage (at full load).

The M1 Ultra is twice the price, closer in performance, for 1/3 wattage.

https://youtu.be/LFQ3LkVF5sM?t=99


I hate Apple for lots of things. But they have done exceptional work with M1/M2. Imagine that power efficiency on servers. Saving so much energy.


> Imagine that power efficiency on servers. Saving so much energy.

There's no magic in those chips. Power efficiency is a design parameter you may or may not optimize for, and ARM server chips dissipate more power because they have a lot more cores, more memory controllers, more cache, more IO, and, in general, a lot more stuff than it would make sense for a personal computer CPU to have. Computation for computation, they are not far from each other.

Intel's x86 ISA mandates more computation per unit of useful computation served - lots of memory write reordering, lots of instruction alignment work (instructions can be from 1 to 15 bytes, IIRC) and other operations that do not translate into anything useful for the user beyond a higher IPC if your instruction stream is just right, but that also dissipate a lot of power.
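
For a concrete sense of scale, a minimal C sketch (the two byte patterns are standard x86-64 encodings; this just prints lengths, it is not a decoder):

    #include <stdio.h>

    /* x86-64 instruction lengths vary from 1 to 15 bytes, and the CPU only
       learns each length by decoding it; every AArch64 instruction is
       exactly 4 bytes. */
    static const unsigned char ret_insn[] = { 0xC3 };             /* ret          (1 byte)  */
    static const unsigned char mov_insn[] = { 0x48, 0x89, 0xD8 }; /* mov rax, rbx (3 bytes) */

    int main(void) {
        printf("ret: %zu byte(s), mov rax,rbx: %zu bytes\n",
               sizeof ret_insn, sizeof mov_insn);
        return 0;
    }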


Dropping the M2 into the same entry-level MacBook Pro they sold last year gains you a couple of hours of battery life as well as a performance increase.

https://www.tomsguide.com/opinion/macbook-pro-2022-battery-l...

That's pretty impressive for staying on the same process node.


Truly amazing; most people (including me) thought that after the M1, perf gains would be marginal.


That battery life test is understating the advantage Apple has over x86, probably because it's idling too much. The gap in a more CPU-intensive web browsing test is more like 2x versus competing x86 ultralights [1].

[1] https://www.notebookcheck.net/Apple-MacBook-Pro-13-2022-M2-L...


But is Apple ever going to allow their CPUs to be used for anything other than their own products? If it isn't even an option then it is mostly irrelevant. I can't get a Windows laptop with an Apple M2 so it isn't worth checking how efficient of a CPU it is compared to Intel and AMD.


Actually it’s not irrelevant. Lots of people choose between a Mac and Windows and the M chips up the competitive pressure on Intel and AMD to improve their offerings.


My workplace gave me a choice between an M1 Macbook or an Intel-based Lenovo. Even though I'd never used a Mac full-time before, it was a no-brainer and I picked the Macbook. It absolutely does matter.


This is the argument Apple fans used to make during the PowerPC G4 days – “it doesn’t matter that Intel makes faster CPUs, because I can’t buy a Mac with them, so the comparison is irrelevant and not worth looking at”. Except, over time, it did matter as sales bled off to Windows machines.

Yeah, the OS matters, but it isn’t the only thing that matters to most people (and with the web, the loss-of-applications cost of switching your OS is a lot lower than it used to be).


Apple has set a high benchmark. AMD, Intel, Qualcomm and Microsoft are under pressure and have plans for M1 competitors.

Lots of Windows users are migrating to Mac for performance reasons.


It is exceptional work, but the system-on-a-chip design is a form of compromise, especially with regard to the availability and pricing of RAM configurations. The current MacBooks ship with the same amount of RAM as my Android phone.


I feel mobile CPUs have been fast enough for the last 5-6 years. It's the battery tech that is lagging.

On the laptop/server side, Qualcomm needs to catch up to AMD/Apple.


Yes, they are so fast that I would be happy to use my phone as a computer if the industry didn't make it close to impossible.


My gut tells me that if we could, there would be phones catching on fire all over the place.

Although a phone has tons of power, and can probably function as a low-end desktop, the lack of a sizeable cooling solution, or even just a plain heatsink, would cause it to throttle pretty quickly.

I have to believe this is the reason, or otherwise someone would have done it already (successfully that is) for such an obvious use case.


> My gut tells me that if we could, there would be phones catching on fire all over the place.

If that were an issue, you would already see it all around.

> Although a phone has tons of power, and can probably function as a low-end desktop, the lack of a sizeable cooling solution, or even just a plain heatsink, would cause it to throttle pretty quickly.

I would rather compare it to laptops, and the processing power of the Snapdragon 8 is in the mid-to-upper segment of the average laptop available on the market.

ARM is also a different platform, with a much better performance-per-watt ratio than a regular PC. There is throttling, but when it kicks in it doesn't reduce your performance as significantly.

> I have to believe this is the reason, or otherwise someone would have done it already (successfully that is) for such an obvious use case.

Believe it or not, the reasons it hasn't been done so far go well beyond the technical, and we could have had computer-capable phones a long time ago.


Try using ffmpeg on Termux and on a regular laptop. Beyond some slight timing differences, my tests on a random file gave me similar results, which is amazing.
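
If anyone wants to reproduce something like this, a minimal benchmark sketch (input.mp4 stands in for whatever test file you use; run the same command in Termux and on the laptop):

    time ffmpeg -i input.mp4 -c:v libx264 -preset fast -f null -

This decodes the file, re-encodes it with x264, and discards the output via the null muxer, so it's a pure CPU test with no disk-write noise.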


Running a simple Gradle build was 10 to 20 times slower.


Can you be specific? What was the phone, and what was the desktop hardware you're comparing here?



Speaking of battery tech and mobile: I wish I could get today's phones but with a "thick option", i.e. add 2-3mm of thickness for greater battery capacity.


Apple has made the last few generations of iPhones thicker to do exactly that.


How much do you want? Motorola has a couple of 6000 mAh battery options, and I think even Samsung has a few 5500 mAh models.


After my experiences with Android I will never ever buy another device with a Qualcomm SoC again.


After my experiences with Android, it'll be really difficult to make me buy an Android phone.

Compared to iOS, everything is slightly clunkier.


I'll take "slightly clunkier" before "extremely locked down and limited in unthinkable ways" any day, and not just for Android, but in general.


I had a totally different experience. One I bought with Exynos was a battery hog, and MediaTek ones generally have bad performance.


Could you explain why?


I'm also curious: what were your experiences with Android that were affected by Qualcomm?


Maybe they had a Snapdragon 810 crap out on them.


IMHO that's because Apple was the first to bother optimizing an ARM CPU for performance instead of battery life.

Performance per watt on ARM has exceeded x86 for a long time, but most manufacturers used that feat to create mobile CPUs instead of cranking up the power to see if it could catch up to high-performance x86.


I think the reason is that Qualcomm needs to sell to phone makers, so they can't make their SoCs very expensive.

On the other hand, Apple can make an expensive SoC without worrying about selling it to anyone else.


Apple has a big advantage over Microsoft in terms of building PCs.

Apple can decide that it is switching from Intel to ARM and not looking back so it not only realizes costs from investing in ARM but it ultimately will realize savings from not having to support Intel.

There is no real unity behind Microsoft, Dell, Lenovo and all the other vendors, plus the motto of Windows users is "who moved my cheese?" You don't use Windows because Windows is the best operating system; you use Windows because it has the most software, and that software is written for and optimized for the x86 platform. Microsoft is always floating ARM-based systems as a hobby, but it cannot commit to going "all in", so it has to support Intel as the primary processor for the foreseeable future; the ARM transition costs Microsoft but never saves them anything.

On top of that Microsoft is dependent on vendors like NVIDIA and AMD to supply drivers for graphics cards and all the other parts for their machines and getting them all to supply good drivers for ARM is yet another big ask. Apple can say "here is the supported hardware" and not have to fight with vendors who have the power to torpedo their ambitions.


At minimum, Microsoft could try for real. In 2019 Samsung announced the Galaxy Book S with an ARM processor. It looked like a nice compact laptop. I would have been interested in it, but it was basically never available and limited to Windows for ARM, which isn't even officially sold by Microsoft; you can run it fine on the new Macs in a VM, but only as part of the beta test access. Even if Microsoft cannot force the whole market to ARM - and might not even desire to do so - at least they could treat Windows 11 for ARM as a full product.


If Microsoft wanted to really try an ARM push they could do worse than releasing a Surface laptop with amazing out-of-the-box Linux support; it might not sell a ton to Windows users, but they could get something moving.


Thank you for a rather enlightening perspective. That explains a lot of big tech behaviors around the x86/ARM frontier that I had failed to grasp.

I find it somewhat ironic that, in a few CEOs' desire to "Embrace… Extinguish" Linux in favor of Windows, Microsoft eventually created a world where Windows is a drain on their resources that they can't axe: though no longer a money-maker in its own right, Windows remains the basis for pretty much all their money-makers. In becoming a "software-first" company, they pretty much followed, if not provoked, the very paradigm that makes Windows a non-monetizable entity next to as-good-as-embedded macOS and ever-free Linux.

Linux, incidentally, is also THE basis for money-makers worldwide, and the OSS model means nobody using it today had to fork over even 1% of the real cumulative cost of that kernel + ecosystem since the 1990s. How's that for a win: Linux 1, Microsoft 0. A win the winner did not even seek, busy doing its own thing, while the other pretty much dug its own commercial grave out of 'relevance debt' (not sure how to word what Windows has become)…

TL;DR: it's like a big item taking up all your inventory that you can't ever ditch because it's required for all your quests. In games, you wish the devs had forgotten about that crap and just made the inventory that much smaller. In real life, Microsoft can't kill Windows without going out of business overnight. sad_pikachu_face.jpg

[Note that IMHO, MS is incredibly strong nowadays, so they'll find a way, and this is a fantastic learning experience for the industry.]


They're the top performing Arm CPUs.


Maybe? There exist some extreme-performance ARM CPUs, like the A64FX from Fujitsu.


They are the top performing ARM computers you can go to a store and buy. I'd love to have an HPE Ampere or a POWER tower under my desk, but it's not easy to get anything like those, and certainly not at a price competitive with generic x86.


Are we confusing things?

They're both RISC, but P9/P10/etc. is based on IBM's own Power architecture (possibly open sourced / an open standard); I wasn't aware it had anything to do with Arm. I could be hopelessly wrong...

Or do you mean something different with "POWER"?


Owning an IBM Power system is a daydream of many nerds, apparently.

And Power is open in that the ISA is open. Power10 is a bang-up-to-date CPU, so the internals are all closed.


There are other interesting ISAs that are not arm64. Sadly, the only ones that could still be viable on the desktop (more like deskside) are either server-grade ARM or IBM's POWER (and neither of them makes Mac Minis).


I would be shocked if an A64FX core could go toe to toe with a Firestorm core on general code. It just doesn't make sense for Fujitsu's use case to spend nearly as many gates on general improvements versus more vector ALUs.


The A64FX is a very specific microarchitecture that's aimed at big core counts rather than being a single threaded monster.


Top performing CPUs in whose laptops?

My Alder Lake is faster than an M1; my Alder Lake needs 4 fans.


Discussing a chip's design in contrast to other chips is useful.


"Perhaps more critically, the new core offers exclusive support for only AArch64 – dropping 32-bit support altogether."

I don't understand why ARM wants to throw away AArch32/Thumb/Jazelle et al.

This really shouldn't be in a phone - a lot of Android apps bundle 32-bit binaries for various purposes, and they won't run on this CPU.

I can see why Fujitsu mandated AArch64, but my phone isn't going to be on the Top 500, and there were benefits to these old instruction sets.


Binary compatibility is no longer a huge concern. By now only very old programs will require arm32 support, and most current offerings on any app store will have been recompiled for the latest and greatest (because software writers also want to look better on newer CPUs). Plus, if the new hardware is that much faster, you may not even need to pack native blobs.


Of the Raspberry Pis, only the 3rd generation onward is capable of AArch64.

Because of this, very few Linux distros supporting the Pi are 64-bit (Oracle Linux is the only one that comes to mind).

There does indeed remain a large AArch32 community that cannot move, and they will never, ever cross this gulf.


I highly doubt that ARM is making decisions about the Cortex-X3 core design on the basis of the Raspberry Pi market.


A new chip not supporting arm32 does not prevent software from being built for the platform. While there are enough users, the distros will be maintained.


Emulation ought to suffice for support of programs compiled for 32 bits.
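
For instance, on a Linux userland this already works via QEMU's user-mode emulator (a sketch; the binary name is a placeholder, and dynamically linked binaries may need extra loader flags):

    # run a 32-bit ARM executable on an arm64-only host
    qemu-arm ./legacy_arm32_app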


Barely any updated Android app ships 32-bit binaries. Since August 2019, you can't even submit an app to the Play Store if there's no 64-bit version.
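
You can check which ABIs an app actually bundles yourself (a sketch; app.apk is a placeholder for any APK you have on hand):

    unzip -l app.apk | grep lib/
    # 64-bit-only apps list lib/arm64-v8a/... entries;
    # apps still shipping 32-bit also list lib/armeabi-v7a/...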


There are also benefits to cleaning up the silicon. E.g., fewer transistors per core can mean more cores, less power, or both.


If this were the case, then it would be better still to design a totally new instruction set.

Why cling to any remnant?


AArch64 is, effectively, a totally new instruction set. It was designed to allow for fully regular decoding, unlike AArch32 & Thumb, which is a big part of why getting those off of the chip is a big win – it's not just the decoders you clean up; you can now make your pipeline and speculation deeper because you can peek N instructions into the future without having to decode the N-1 instructions in front of it first to figure out where the Nth instruction lies. That's part of why you see AArch64-only designs like this and the A12+ cores with much larger reorder buffers.
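
A minimal C sketch of that point (a toy model, not real silicon): with fixed 4-byte instructions the address of instruction N is pure arithmetic, while a variable-length stream forces you to walk every predecessor first.

    #include <stddef.h>
    #include <stdint.h>

    /* Fixed-width ISA (AArch64-style): instruction N is at a known offset,
       so a wide front end can grab many instructions in parallel. */
    static const uint8_t *nth_fixed(const uint8_t *code, size_t n) {
        return code + 4 * n;
    }

    /* Variable-width ISA (x86-style, toy rule): each length is only known
       after (partially) decoding, so finding instruction N is sequential. */
    static size_t toy_length(const uint8_t *insn) {
        return 1 + insn[0] % 15;      /* pretend lengths of 1..15 bytes */
    }

    static const uint8_t *nth_variable(const uint8_t *code, size_t n) {
        for (size_t i = 0; i < n; i++)
            code += toy_length(code); /* dependent decode of each predecessor */
        return code;
    }

    int main(void) {
        uint8_t code[64] = { 0 };     /* stand-in for an instruction stream */
        /* fixed: one multiply; variable: 8 dependent decodes */
        return (int)((nth_fixed(code, 8) - code) - (nth_variable(code, 8) - code));
    }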


You’re right but they are the only ones shipping in something that isn’t a turd.


> M1/M2 are not even the current top performing CPUs...

What are?



