If there were an actual energy efficiency advantage -- i.e. less power consumed for the same amount of work -- Google would already be 100% on ARM. Why do you think they would leave that on the table?
I realize the situation changes every time a new CPU comes out, but I have never personally seen a real workload where ARM won on energy efficiency and had reasonable performance. Tests like [1] and [2], showing x86 with an orders-of-magnitude lead on database performance vs. AWS Graviton2, should give you serious pause.
Those tests are pathological cases caused by bugs, and aren't representative of the performance of the hardware. (For that one, I suspect ARMv8.1-A atomics weren't enabled in the compiler options...)
Well, the listed GCC options don't specify a microarchitecture for either ARM or x86, so it's probably a k8-compatible binary on the top line, too. I'm also not sure why atomics would matter in a single-client MySQL benchmark. Either way, the risk that your toolchain doesn't automatically do the right thing is one of the costs that keeps people from jumping architectures.
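For the curious, here's roughly what's at stake (a toy contended-counter sketch I wrote, not the benchmark's code): on baseline ARMv8.0, C11 atomic read-modify-writes compile to LDXR/STXR retry loops; with -march=armv8.1-a (or newer GCC's -moutline-atomics) they become single LSE instructions like LDADD, which behave much better under contention.

    /* atomic_demo.c -- toy contended counter, not the benchmark's code.
     *
     * ARMv8.0 baseline (atomics become LDXR/STXR retry loops):
     *   gcc -O2 -pthread atomic_demo.c -o demo
     * ARMv8.1-A (atomics become single LSE instructions, e.g. LDADD):
     *   gcc -O2 -march=armv8.1-a -pthread atomic_demo.c -o demo
     */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_long counter;

    static void *worker(void *arg) {
        (void)arg;
        for (long i = 0; i < 10000000; i++)
            atomic_fetch_add(&counter, 1);  /* the contended RMW op */
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        printf("%ld\n", counter);  /* expect 40000000 */
        return 0;
    }

Whether the benchmark in question actually hit this is my speculation, but it's exactly the kind of default a toolchain can silently get wrong when you switch architectures.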
I presume this has to do with the database software's support for ARM.
PostgreSQL appears to have had ARM support for some time. MySQL only added it with 8.0. As for the other database options, ARM isn't even supported yet.
>If there were an actual energy efficiency advantage -- i.e. less power consumed for the same amount of work -- Google would already be 100% on ARM.
Energy-efficiency wins on server workloads are a very recent thing -- so recent that they basically start with Neoverse N1 / AWS Graviton2. And even then, not on ALL workloads.
Not all software is optimised for ARM, compared to decades of optimisation on x86. Not to mention the compiler options, and that EPYC was running on bare metal with NVMe SSDs while Graviton2 ran on Amazon EBS.
And most importantly, on Amazon you would be paying for 64 threads at the price of 64 Graviton2 cores, i.e. it should be tested against a 32-core EPYC 2 with SMT.
I still doubt Graviton2 would win a fair test, but it would be close, and it would be cheaper. And that is the point.
But up until recently Intel has had a manufacturing process advantage that made it difficult or impossible for the likes of Google to source competitive high-performance cores. That advantage has slipped away.
EPYC is an innovative product and comparing it with Graviton clearly shows that AMD has done a great job and that Graviton is not quite fully competitive yet (but not to the extent that those benchmarks seem to indicate, as others have commented).
I think it's possible to overstate the energy-efficiency gains from using ARM, but all the indications are that ARM cores with fully competitive performance will emerge and that they will have some efficiency advantage - after all, why would AWS be investing in ARM if not?
That's pretty silly. What makes you say this? There are large groups at Google responsible for buying every computer there is and evaluating the TCO thereof. With operating expenses exceeding two billion dollars per week, they have a larger incentive to optimize their first-party power efficiency than anyone else in the business. I'm fairly certain their first-party workloads (search etc.) are the largest workloads in the world.
> With operating expenses exceeding two billion dollars per week
"Alphabet operating expenses for the twelve months ending March 31, 2020 were $131.077B, a 13.47% increase year-over-year."
Operating expenses include everything: salaries, etc. Google isn't spending $2B/wk on power and server purchases, though those costs are still huge, no doubt.
Neither Amazon nor Apple is buying off-the-shelf ARM chips. They are both designing processors to meet their needs. Amazon bought Annapurna Labs, and Apple bought a slew of companies specializing in processor design.
I don't think they would switch even if they could in the near future.
The poster above says that Amazon "leapfrogs." The question is "leapfrogs where?" The fact that ARM cores cost 100+ times less than Intel's, and are n times more power efficient, has been well known for an eternity.
What people don't get is that on x86 you get the whole platform, while ARM is a very, very DIY thing, even if you are a multinational with lots of money for R&D.
> I don't think they would switch even if they could in the near future.
They've already brandished their ability to port their whole product from x86 to POWER[1], and to deploy POWER at scale if they need to[2]. My personal interpretation is that these announcements are made to keep their Intel sales representatives in line, but the fact that you don't see them or anyone else brandishing an AArch64 port should tell you something.
I'd say, less bluntly, that Google is not as innovative as it once was. Old large companies ossify, and Google is not an exception. It failed on social networking (Facebook), on instant messaging (WhatsApp), on picture memes (Snapchat), on video memes (TikTok), on videoconferencing (Zoom)... you may see some kind of pattern there.
If asked whether Google will succeed at something new (say, Fuchsia), given those priors, my response will be: "No. It would be a surprising first in many, many years. The company is in decline."
What we're missing is the connection between the services of the large companies: Google, Amazon, and Microsoft all have an offering made of devices (hardware), websites (software), and cloud services. There seems to be a synergy, where you benefit from doing all three things in-house to reduce costs on your core product or to capture consumer mindshare. Microsoft is getting back into phones with an Android offering. Amazon is not giving up on Kindle.
Notice how Apple is missing from the cloud-services part here. They have some cloud infrastructure internally (for Siri), but they do not sell it.
Even if they don't start a cloud offering, they may sell their CPUs to others who will, before eventually rolling their own hardware.
This will give time to the people adapting existing server software to work better on Apple's ARM CPUs (recompiling is just the tip of the iceberg; think about the differing architecture, what can be accelerated, etc.).
We are only now seeing SIMD/AVX optimization for database-like computation. It may take a while.
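As a rough illustration (a toy kernel I made up, not code from any real database), this is the kind of per-ISA tuning involved -- the AVX2 version below would have to be rewritten with NEON intrinsics to run on ARM:

    /* simd_scan.c -- toy column scan: count values above a threshold.
     * Illustrative only; real databases hand-tune kernels like this
     * per ISA (AVX2 here, NEON on ARM).
     * Build: gcc -O2 -mavx2 simd_scan.c -o scan
     */
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Scalar baseline: one comparison per iteration. */
    static long count_gt_scalar(const int32_t *col, long n, int32_t key) {
        long hits = 0;
        for (long i = 0; i < n; i++)
            hits += (col[i] > key);
        return hits;
    }

    /* AVX2: compare 8 lanes per iteration, then count the set lanes. */
    static long count_gt_avx2(const int32_t *col, long n, int32_t key) {
        long hits = 0;
        __m256i vkey = _mm256_set1_epi32(key);
        long i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256i v = _mm256_loadu_si256((const __m256i *)(col + i));
            __m256i m = _mm256_cmpgt_epi32(v, vkey);
            hits += __builtin_popcount(
                _mm256_movemask_ps(_mm256_castsi256_ps(m)));
        }
        for (; i < n; i++)  /* scalar tail for leftovers */
            hits += (col[i] > key);
        return hits;
    }

    int main(void) {
        int32_t col[16] = {1,9,3,12,5,7,20,2,8,15,4,6,11,0,19,10};
        printf("%ld %ld\n", count_gt_scalar(col, 16, 7),
               count_gt_avx2(col, 16, 7));  /* both print 8 */
        return 0;
    }

Multiply that porting effort across every hot loop in a database engine and you can see why the optimization gap takes years to close.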
Apple is not missing out because it doesn’t jump on every bandwagon that is not part of its core competency. It’s still the most profitable out of all the tech companies.
YouTube requires a lot of server space compared to TikTok (<10 min means no ad money, so people make videos at least 10 min long!), while Zoom requires almost no space and can sell corporate subscriptions.
The only reason YouTube still enjoys some success now is that it wasn't made in-house, and the acquisition wasn't too badly managed. GrandCentral (parts of which still live on as Google Voice) was a different story.
But it only shows how long ago Google's last in-house success was. The Alphabet rebranding changed nothing. Since YouTube, Google has turned into another Yahoo for startups: a place they go to shrivel and die.
1: https://openbenchmarking.org/embed.php?i=2005220-NI-GRAVITON...
2: https://openbenchmarking.org/embed.php?i=2005220-NI-GRAVITON...
If you're wondering why ARM needs to have both competitive performance and energy efficiency, see Urs Holzle's comments on wimpy vs. brawny[3].
3: https://static.googleusercontent.com/media/research.google.c...