Apple Silicon: The Passing of Wintel (mondaynote.com)
364 points by robin_reala on July 12, 2020 | 600 comments


This is very bad for the US semiconductor industry. Intel is the only US company with state of the art fabs in the US.[1] (Global Foundries 14nm fab in East Fishkill, NY, formerly an IBM fab, maybe. It's owned by the Emirate of Abu Dhabi and is being sold to ON Semiconductor.) Intel is profitable because they have high margins on x86 family parts. That margin will now start to drop as Intel faces competition from commodity ARM processors.

The US already lost the DRAM industry, the disk industry, the display panel industry, most of the small component industry, and the consumer electronics industry. In ten years, the US won't be able to make electronics in volume.

[1] https://en.wikipedia.org/wiki/List_of_semiconductor_fabricat...


I'm no cheerleader for Intel but if Intel really wanted to they could release competitive ARM offerings. They've got enough money they could just buy an existing ARM developer and then produce those chips in their fabs.

They've managed to change direction in the past. They were all-in on Itanium and NetBurst until the Opteron showed up. They then kicked both to the curb and went all-in on the Pentium-M/Core microarchitecture.

Intel would be foolish to ignore the ARM market, if for no other reason than that the server market has started to shift its focus to ops/watt rather than just ops/dollar. A 64-core ARM chip uses less power than a 64-thread Xeon with comparable performance.

That means the lifetime cost of that ARM chip will be much lower than the Xeon since it'll cost less to power and cool it (and the datacenter). Lower power requirements can also change the calculus of datacenter siting.

I don't see Intel getting to ARM ops/watt levels with their x86 designs. The Lakefield designs are promising but the "big.LITTLE" core configuration is more useful in mobile applications than servers.

Intel losing out to ARM designs is really Intel's choice at this point.


It’s crazy that Intel owned the best ARM design 15-20 years ago (Xscale) yet sold it a year before the iPhone shipped. Intel actually turned Apple away when they wanted an ARM SoC.

Both Intel and Microsoft made some kind of categorical error in 1995-2005 in assuming that their dominance was tied to a particular programming interface. Intel clung to the x86 instruction set while Microsoft operated on a “Windows everywhere” mindset, first pushing Win32 and then .Net.

Wintel seems finally dead, and good riddance.


> Intel clung to the x86 instruction set

Itanium launched in 2001 (per Wikipedia); if anything, I'd say they overestimated how readily they could ditch x86. Of course, it might be that replacing x86 was a good idea and it was just that Itanium sucked (I'd accept that possibility, given the whole "needs a smarter compiler than exists" thing). But I do agree with the general feeling that Intel and MS both overestimated how solid/entrenched they were.


I think AMD killed Itanium. I think Intel's plan was to leave x86 forever stuck at 32-bit and have Itanium be the natural upgrade path as computers started running into the 4 GB limitations. I think it would have been a nice transition story.

However, when AMD released AMD64 it killed this strategy. You could now buy a chip that natively ran x86 32 bit at full speed, plus be able to run code that could address more than 4 GB of RAM.

Once AMD64 came out, Itanium was a dead architecture.


If anyone's interested, Raymond Chen did a series of posts on the idiosyncrasies of the Itanium processor, starting here: [0]

[0] https://devblogs.microsoft.com/oldnewthing/20150727-00/?p=90...


I read it and found that a write to R0 (the register that reads as zero) triggers a CPU fault.

This means there is hardware checking for a condition that will not happen in the normal course of events - a compiler won't generate such code, and a write to a register that reads as zero would be harmless anyway.

That hardware stays there, occupying space (no matter how little) and draining energy (no matter how little).

This part of the hardware is utterly pointless and shows how priorities were skewed during the design phase. It also shows why Itanium was such a bad chip - features were piled up instead of being cut down.

RISC-V did cut down on features first (compare division overflow handling in RISC-V and OpenRISC) and has a process for adding them. This is why RISC-V is easy to implement even in an OoO variant and why it is being picked up by everyone and their dog.
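To make the "cut down" point concrete: RISC-V defines division so that it can never trap - divide-by-zero and signed overflow simply return fixed values - so no checking hardware is needed at all. A small sketch of those semantics for 32-bit operands (my reading of the spec, so double-check before relying on it):

    # Sketch of RISC-V RV32M signed DIV/REM semantics: no traps, no flags.
    # Divide-by-zero and the INT_MIN / -1 overflow case return fixed values
    # instead of raising a CPU fault (values per my reading of the spec).

    INT_MIN, MASK = -2**31, 2**32 - 1

    def _truncdiv(a: int, b: int) -> int:
        # Truncate toward zero like the hardware does (Python's // floors instead).
        q = abs(a) // abs(b)
        return -q if (a < 0) != (b < 0) else q

    def rv32_div(rs1: int, rs2: int) -> int:
        if rs2 == 0:
            return -1 & MASK                 # divide by zero -> all ones
        if rs1 == INT_MIN and rs2 == -1:
            return INT_MIN & MASK            # signed overflow -> returns the dividend
        return _truncdiv(rs1, rs2) & MASK

    def rv32_rem(rs1: int, rs2: int) -> int:
        if rs2 == 0:
            return rs1 & MASK                # divide by zero -> remainder = dividend
        if rs1 == INT_MIN and rs2 == -1:
            return 0                         # signed overflow -> remainder = 0
        return (rs1 - _truncdiv(rs1, rs2) * rs2) & MASK

    print(hex(rv32_div(7, 0)), hex(rv32_rem(INT_MIN, -1)))  # 0xffffffff 0x0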


"...as computers started running into the 4 GB limitations..."

You mean 64 GB limitations? 32-bit x86 could handle that much, and I'm sure it would have without AMD64.


There is still the 4GB/process limit, even if with these extensions the total amount of memory in the computer could be larger.


Didn’t that require PAE and all sorts of workarounds, all for a not-that-great experience?


No workarounds required if it's enough just to have a lot of up to 3 GB processes (arbitrary, upper 1 GB was typically reserved for the kernel and memory mapped peripherals/IO, like GPU, USB, SATA). Of course if a single process wanted to access more, that would have required remapping or multi-process approach. Or special hacks like Windows AWE [0].

All in all, not ideal, but I'm sure we would have managed with it a bit longer.

[0]: https://en.wikipedia.org/wiki/Address_Windowing_Extensions
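To put the numbers in one place, a back-of-the-envelope sketch (assuming the typical 3 GiB user / 1 GiB kernel split mentioned above):

    # Back-of-the-envelope numbers for the 32-bit / PAE discussion above.
    GiB = 2**30

    virtual_per_process = 2**32     # 32-bit virtual address space per process
    pae_physical_limit  = 2**36     # PAE: 36-bit physical addresses
    user_split          = 3 * GiB   # typical 3 GiB user / 1 GiB kernel split

    print(f"virtual space per process : {virtual_per_process // GiB} GiB")
    print(f"PAE physical ceiling      : {pae_physical_limit // GiB} GiB")
    print(f"usable per process (3/1)  : {user_split // GiB} GiB")
    print(f"3 GiB processes that fit  : {pae_physical_limit // user_split}")

So the machine could hold far more than 4 GiB of RAM, but any single process still had to fit inside its own 32-bit window (or reach past it with AWE-style remapping).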


That is also what I believe: had AMD not been allowed to produce x86 clones after their legal dispute, we would all be using Itanium by now.


I fear the Itanium strengthened the belief at Intel that x86 is the one and only architecture.

The downfall of the Itanium had several reasons. They went beyond RISC with EPIC, a technology which had great theoretical promise but had not been shown to yield those results in practice. Also, the project suffered from bad management and delays. Finally, AMD pushed x86 just far enough forward to be a viable alternative.

In the beginning, the Itanium was an expensive, power-hungry monster, but it did perform quite well for heavy computations. It could have had a much better future, if Intel had not abandoned it - the software situation could have been much better, if for example Linux boards had been made available to all interested. The latest version only made it to a 32nm process, the transistor count being smaller than in a modern iPhone. So just redoing the same design in a cutting-edge process would make for a very interesting processor.


Intel misjudged how different selling to businesses - processors for servers usually running Windows software that is long-lived, special-purpose, expensive, enterprise, etc. - would be from selling processors for consumer hardware like phones. There, most software is mass-produced, cheap, and short-lived. So for the consumer, changing the architecture to something different like Arm is easier, because if fart-app #1 doesn't work, just use #2... Whereas with business purchases, the software is picked first, and the software vendor dictates the hardware, usually being very conservative and not prone to "experiments" like Itanium.

Intel misjudged both sides, for phones x86 compatibility was never necessary, for business machines, it absolutely is (for the largest number of customers, there have been exceptions).


Did Intel ever present a roadmap for getting IA-64 chips into commodity PC hardware? I recall at the time it was assumed to be their plan, but I wonder in retrospect if they were actually interested in disrupting their own low end products.


The original Merced was a performance disaster, if I remember right, and nobody wanted it.

HP saw the disaster coming and designed their own McKinley Itanium, but a bit later, and HP obviously didn't want Dell to have it.

The compiler support was not good outside HP-UX, if I remember correctly.

And then AMD64 happened, and Microsoft told Intel that there would be no further 64-bit PC instruction sets, and the market would choose between AMD and Itanium.

Cue the guillotine.


Had AMD not been allowed to produce x86 clones, the history would have been quite different; eventually everyone would have been dragged onto Itanium no matter what.


x86-64 not only killed the Itanium but also all the RISC competitors like Alpha and especially SPARC. While x86 had pretty much pushed the alternatives off the desktop, if you wanted to run a 64-bit server, you had to go with the RISC architectures. When I joined my company in 2005, all the servers were still Sun SPARC and only slowly did Opterons get deployed.


Itanium is credited with killing the RISC market, IMO much more correctly than attributing it to amd64 - which just mopped things up after the great Chipzilla got everyone to cancel their projects before the first Itanic started its maiden voyage toward the iceberg known as "real world use".

A non-trivial amount of know-how and - if rumours are to be believed[1] - direct design then went from the Alpha team at Compaq to AMD, though Intel retained some of it (AMD had already been cooperating with Alpha, leading to the funky case of "jig it enough and an Athlon-MP motherboard might just run an Alpha and an Athlon at the same time").

[1] - Once heard from semi-reputable source that an early K8 changelog entry contained "dropping support for VAX floating point formats". Unverifiable unless someone somehow publishes those docs, but it did fit with "That opteron looks suspiciously similar to EV7"


Which is a pity; Solaris on SPARC is currently the only server UNIX that makes proper use of hardware memory tagging, while neither AMD nor Intel managed a proper implementation of MPX.


do you have any other details about that, like a comparison or technical details?

it sounds interesting



wow, thanks!!


Itanium didn't fail because people wanted x86, it failed first and foremost because it sucked badly.


Maybe correct; there was even a 32-bit x86 mode, it was just ridiculously slow.


I believe it was software that did the emulation. I actually had a dual Merced system that I ran Linux on for a while. It didn't suck so much as it was "different."

Intel should have known this, but you buy processors because they are faster or cheaper. These days the third rail is energy, but it's basically the same: if you pay more you want a lot more, and "different" costs. Itaniums were a whisker faster than Intel's IA-32 offerings at the time, but they cost a lot more, and they were different.

Beyond that, it was like a geek designed the specs without a market story. RISC chips were coveted by x86 dorks because they had 32 registers to the x86's 5 or 6; well, IA-64 had 128... If you're just doing simple stack-frame compiler work without whole-program optimization, how many functions need 32 registers, let alone 128? Everyone could see that you wanted to simplify things in order to put silicon where it mattered for longer-term performance. In the mid to late 90s, reordering conditionals for branch prediction was sort of hot in compiler-dork and low-level circles; well, Merced fixed that by adding a lot of complex logic to take both branches and retire the wrong one.

IBM, HP and Sun needed different reliability and serviceability features on their high-end systems too; from the big enterprise-metal perspective these were toy parts for PCs, but they cost more, didn't run your existing software and weren't 2x faster or anything like that. Intel didn't take a holistic market view as they made it. Rather than work with IBM, Sun and HP on those high-end issues, they just went back to the lab and told them when the first parts would be out.

Not really sure what goes on inside Intel; they have a lot of brilliant people. I suspect that they cannibalize themselves because nothing will ever be as successful as the x86 was when they owned every aspect of the part and package, so things are decided to be a failure before they ship. With IA-64 nobody asked why. Xeon Phi, the GPU and machine-learning stuff reflect that too.


You can make use of a lot more than 32 registers, but not with existing RISC or x86 instructions. If I remember correctly, EPIC had global registers like your normal RISC, local registers like SPARC, and support for batch-renaming registers to implement software pipelines. Software pipelining can make use of a lot of registers and exploit the high IPC EPIC was supposed to provide. Still, an ISA that avoids the pipeline prologue and epilogue could make software pipelining useful for a lot more algorithms... in theory.


ARM essentially toyed with VLIW with Thumb and Thumb2.

As I understand it, these were influenced by SuperH, a 32-bit RISC design that uses 16-bit instructions, so two are always executing at once.

The busybox-armv7m Thumb2 binary is the smallest of all the compiled architectures.

Thumb2 appears to have been much more practical.


There’s nothing VLIW-like in Thumb or Thumb2. It’s just a more compact encoding.


VLIW stands for Very Long Instruction Word, not variable length.


> Both Intel and Microsoft made some kind of categorical error in 1995-2005 in assuming that their dominance was tied to a particular programming interface.

Actually their judgement there was correct. Their dominance absolutely was tied to monopoly power, via "particular interfaces". Working on alternatives would have undermined the monopoly.

Their error was thinking the monopoly wouldn't eventually be undermined anyway. They upped the risk for more reward, and got bit.

This incidentally is another reason why monopolization is bad: even when it doesn't last indefinitely, it makes for shitty companies that refuse to or cannot adapt, and will stubbornly stick to their ways until a more sudden failure. Stagnation and then a jolt means less growth, and irrespective of growth it is bad for society. No government should want this.


I think this signifies that the management has failed the company. Taking safe bets instead of taking risks. There is no reason Intel shouldn’t have embraced both architectures. They had to know x86 is never going to be a low power device. Apple wasn’t the only company that saw mobile coming.


But notably, Intel still have an ARM architecture license and thus can still design their own ARM cores if they so desire.


I believe they lost their license when they sold XScale to Marvell.


From https://www.forbes.com/sites/rogerkay/2012/10/31/amd-jumps-i...:

> In fact, Intel pointed out to me in the wake of the article’s publication that it possesses an ARM architectural license, the highest level of relationship with the British intellectual property (IP) licensing firm, which Intel retained when it sold its communications product line to Marvell in 2006.


Apparently the financial planning model Intel used to justify turning down Apple in that case had a math error (dividing by 1+X instead of multiplying by 1+X) that actually swung the analysis in the wrong direction.

Given the resulting impact (Intel being far behind in the smartphone market), it might qualify as the costliest Excel error of all time.
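If the story is accurate, the flavour of the mistake is easy to reproduce. A toy sketch (every figure below is invented for illustration; one plausible reading of the "1+X" description is a projected cost reduction at volume):

    # Toy illustration of the rumoured spreadsheet error (all figures invented).
    # If the model projected unit cost falling by x at volume, the correct
    # adjustment multiplies by (1 + x); dividing by it instead inflates the
    # forecast cost and flips the take-the-deal / walk-away conclusion.

    apple_offer = 12.00   # hypothetical price per chip Apple would pay
    base_cost   = 15.00   # hypothetical current unit cost
    x           = -0.30   # hypothetical 30% projected cost reduction

    correct = base_cost * (1 + x)   # 10.50 -> below the offer: take the deal
    buggy   = base_cost / (1 + x)   # 21.43 -> above the offer: walk away

    print(f"correct forecast cost: {correct:.2f} vs offer {apple_offer:.2f}")
    print(f"buggy forecast cost  : {buggy:.2f} vs offer {apple_offer:.2f}")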


Otellini mentioned in one interview that Intel's projected cost was wrong, but I've never seen anyone at Intel go into specifics as to how it was wrong.

>At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought."

https://www.theatlantic.com/technology/archive/2013/05/paul-...


I think it was mostly the greed of the business development people at Intel, who thought: why build a chip with such low margins? What they didn't see was that the mobile revolution was about to take place and billions upon billions of ARM chips would be shipped in mobile devices around the world.


See also the rumoured refusal of DEC to drop margins on the Alpha when Apple was shopping around for a replacement for the m68k.


Source? That would be a fascinating read.


.... That's like literally a trillion-dollar mistake


Given Apple's near-monopsony power in the mobile market, Intel may never have been able to produce high-margin chips in that market.


Yeah, it never would have been that profitable to be an ARM vendor for Apple. I couldn't even remember who made the chips in the iPhone 1, I had to look up that it was Samsung.

If Intel managed to convince Apple to go x86, and they had low power chips competitive with ARM at the time, then that would be a different story.

You could even argue that it was a happy accident; Intel providing a high performance ARM core may have forced mobile competitors to react, producing high performance ARM chips of their own, and sped up the erosion of the x86 monopoly we're finally witnessing today by several years.


Intel refusing Apple's business and not moving into handheld mobile devices put a whole lot of R&D money into the pockets of Intel's rivals.

I doubt TSMC would have been able to afford to catch up to Intel without their current high volume business.


If they stayed with Intel for a very long time, I agree. But I don't think that would have happened.

Apple pursued custom silicon very aggressively from the start, with the acquisition of PA Semi less than a year after the launch, and the first custom chip coming in the iPad in 2010.

Although with an Intel chip they wouldn't have needed to do this for performance reasons, getting away from Intel's fat gross margins would have been an equally compelling reason.

If Intel was willing to design the chips and take TSMC sized margins on them, I guess they could have kept the iPhone business for years, but that isn't in their DNA.

I can't see Intel ARM iPhone chips as anything other than a shortly lived oddity.


My assumption is that if Intel had gotten serious about mobile devices, Apple would not have been their only customer.

By sending all that business to TSMC and Samsung, they were financing their rival's R&D.


Getting serious about mobile devices means dropping their historic 60% gross margin target.

If they were willing to do that, I totally buy in to your scenario. It would have been the right play with the benefit of hindsight.

But it would have represented the biggest shift in their business model since they dropped DRAM. Just agreeing to fab the iPhone 1 CPU wouldn't have been nearly enough.


In a 2013 interview, Intel's Paul Otellini had already come to see that refusing to enter the mobile device market was a mistake.

>It was the only moment I heard regret slip into Otellini's voice during the several hours of conversations I had with him. "The lesson I took away from that was, while we like to speak with data around here, so many times in my career I've ended up making decisions with my gut, and I should have followed my gut," he said. "My gut told me to say yes."

https://www.theatlantic.com/technology/archive/2013/05/paul-...


Right, but I don't believe this is evidence they'd have been willing to do what it takes.

Keep in mind that they didn't quit their Xeon Phi strategy until 2017, four years after that interview. The parallels are strong.

It's a market as valuable as mobile, but they weren't willing to get their hands dirty, build a commodity GPU, and offer either a CUDA bridge a la ROCm or a concerted effort at bringing OpenCL up to speed.

Instead, they tried for a decade to bring x86 to GPGPU, not for any technical advantage, but because if they can make it stick they can be an (at worst) duopolistic vendor.

They started with in-order cores little better than a Pentium with massive vector units, and progressively improved them to Atom cores once it became clear how much work you'd have to do to port legacy x86 code onto them, at which point why not just port to CUDA?

Now the Xeon Phi torch will be passed to... absolutely standard Xeons with HBM. And they're building that general purpose GPGPU, but it's going up against a version of CUDA that's had a decade longer to mature and a nVidia with 102% of their market cap, instead of 9.9%.

The reason for this insanity is Intel realized just how profitable monopolies are, and they're willing to make risky (almost stupid) bets in the hope that they'll yield their next one. They're searching for the home runs with little enthusiasm or staying power for anything else.

This is the Intel I know, and it's not the kind of Intel that's willing to take crappy gross margins to fab commodity ARM chips. It's the kind of Intel that's going to try to do everything in its power to cram some very out-of-place Atom processors into mobiles in the vain hope they'll catch on through a combination of incentives and temporary consumer insanity.

Which is exactly what they did try instead of the ARM thing, from 2008 to 2016. RIP.

If Intel had made the ARM CPU for the original iPhone and other early mobiles, it would have followed the same trajectory as Intel's original SSDs. They'd have been years ahead of their competition at launch, but left to stagnate as the rest of the market caught up, once Intel realized they were actually going to have to work for their money with no sustainable advantage.


Absolutely agree with all of that. As to that interview:

- He's probably flat-out lying, poopooing the data driven stuff to come off more human, humble, and sympathetic to the interviewer.

- If the numbers really did say they should stick to their monopoly, huge extra revenue be damned, then this is some model the FTC and "...but consumer prices!" crowd needs to be forced to grapple with.


Do you have a source for that?


Source?


Sounds to me like a joke based on https://en.wikipedia.org/wiki/Pentium_FDIV_bug


Intel deserves it 100%. No tears shed.


Is there a need for .Net anymore?

Even Microsoft proved that they could build a featurific program without .Net, in VSCode. It's an Electron and JavaScript application. And they were able to make it run everywhere.


Yes, it’s obviously needed, especially with the introduction of .Net Core. And seriously, comparing C# or F# with JavaScript is ridiculous. Typescript on the other hand is quite good, but its compiled JavaScript performance is much worse than .Net.


You can make everything in anything, and it doesn't prove anything!


>They've managed to change direction in the past

Worth pointing out the two directional changes that Intel made: from DRAM to microprocessors, and from IA-64 / NetBurst high clock speed to Pentium M IPC. The first one was Andrew Grove, the second was from his disciple Patrick Gelsinger.

Those people are gone, retired or pushed out by Intel's politics. Their legacy and work continued for about 5 years, as that is the lead time of their roadmap. And we have seen what has happened to Intel since 2016.


[flagged]


lol no


>I'm no cheerleader for Intel but if Intel really wanted to they could release competitive ARM offerings. They've got enough money they could just buy an existing ARM developer and then produce those chips in their fabs.

The problem is that Intel no longer has the leading state of the art fabs. They used to be two generations ahead of everyone else, and now they're two behind. The inertia of x86 has kept them afloat, but the longer Intel fabs lag the more ground ARM has been able to make up.

Intel fabbing ARM parts isn't going to do them much good unless they also manage to get their process shrink delays worked out and behind them.


Intel does have manufacturing issues at smaller node sizes, but that's a really complicated thing to measure; node size claims from various semiconductor manufacturers are more marketing than anything else now.

There have been some suggestions to replace node size claims with better density metrics: https://ieeexplore.ieee.org/document/9063714. It's not like TSMC's fabs are an order of magnitude more advanced than Intel's fabs. If Intel had an ARM chip to produce today, they could use their considerable fab volume to produce it profitably. Even at larger process sizes they could still produce competitive ARM chips; it's not like their current x86 chips at 14nm are slouches.
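For a rough sense of what a density comparison looks like instead of node names, here's a sketch; the MTr/mm^2 numbers are the approximate, commonly cited figures for high-density cells, so treat them as illustrative rather than authoritative:

    # Rough transistor-density comparison instead of node-name marketing.
    # Density figures are approximate, commonly cited MTr/mm^2 values for
    # high-density cells; treat them as illustrative, not authoritative.

    density_mtr_mm2 = {
        "Intel 14nm":  37.5,
        "Intel 10nm": 100.8,
        "TSMC 16nm":   28.9,
        "TSMC N7":     91.2,
    }

    baseline = density_mtr_mm2["Intel 14nm"]
    for node, d in sorted(density_mtr_mm2.items(), key=lambda kv: kv[1]):
        print(f"{node:10s} {d:6.1f} MTr/mm^2  ({d / baseline:.2f}x Intel 14nm)")

The point being that the name on the node tells you very little: by these figures Intel's "10nm" is denser than TSMC's "7nm", while Intel's long-running 14nm really is about a full node behind.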


> I'm no cheerleader for Intel but if Intel really wanted to they could release competitive ARM offerings.

Except that everybody seems to forget that Apple's ARM chip isn't commodity.

Apple is going to tune their ARM specifically to run portable workloads. For starters, they're going to have humongous caches that make the chips expensive.

And they're going to bury the expensiveness of the chip in the enormous margins of iDevice ecosystem.


There's no indication that Apple's Silicon will be more expensive than what's used in iOS/tvOS/watchOS devices currently.


Expense is related to chip size.

Apple isn’t going to release chips that benchmark slower than the Intel equivalents. To sell the transition they’ll need to say something simple like “2X faster” on the big screen. So I expect to see 8+ core chips as standard even if they don’t need that many to deliver a good product.

The question is whether such a chip would be cheaper for Apple.


They don't need to be 2x faster than any Intel chip, just the ones in products they're replacing. And as it so happens, it's pretty easy to get 2x the performance over an essentially passively cooled and severely power-limited Intel chip like the one in the Air.


Absolutely. And I don't think it will be difficult at all for Apple to crush Intel in benchmark performance per watt—let alone real world performance with their more targeted silicon.

The interesting question is going to be how Apple's GPU stacks up to the competition. We know from the A13X that it is already quite good and I fully expect it to exceed Intel embedded graphics. But what about higher end machines? How long till Apple GPUs can take the fight to discrete graphics?


Why couldn't Apple use discrete graphics? We don't know that yet. NVIDIA is already starting to support aarch64, so you can expect AMD to not be far behind.


They might, but in this WWDC talk we get some pretty strong tea leaves that they won't.

https://developer.apple.com/videos/play/wwdc2020/10686/

I'd be surprised to see discrete graphics in anything below an ARM iMac/iMac Pro.


The video explicitly states that Apple Silicon based Macs will use a shared memory approach between CPU/GPU which is a strong indicator they will go all-in on a combined chip or at least on-die approach.

My bet is that for the iMac & Mac Pro, Apple will use custom AMD on-die GPUs (like the now-discontinued Kaby Lake-G). They just haven't announced it yet and AMD seems to be silent about it. I also bet the demo they showcased of Rise of the Tomb Raider was running on such an unannounced custom chip.

If you think about it, it doesn't make much sense for Apple to enter the high-end GPU segment and sink a significant amount of research cost into it. What they need is power-efficient GPUs for their mobile line (iPhone, iPad, MacBook Air & low-end MacBook Pro), so that is where an all-Apple GPU design will be used. But for professional high-end use, they will not be in a position to ship a GPU (no matter whether on-die or discrete) that could compete with AMD's or Nvidia's performance hogs out of nowhere.

Market share for high-end MacBooks + iMac + Mac Pro is just too small given their overall product portfolio (including mobile devices), and the research effort too high. Also, AMD would probably happily design them a custom GPU chip for on-die usage, just as they do for consoles and did for Kaby Lake-G. It wouldn't hurt AMD's business, even if Apple plans to replace AMD with custom all-Apple GPUs in the long run (I doubt it), as it is very unlikely Apple will ever license them to other CPU vendors or enter the market for discrete GPUs for Wintel machines.


The demo was on the same hardware as in the dev kit they released. They stated this publicly.


You sure about that? Because other parts of the demo were absolutely not run on the DTK.


Craig Federighi confirmed it during a post WWDC interview.


> How long till Apple GPUs can take the fight to discrete graphics?

I already play Fortnite, Call of Duty, PUBG, etc at 60fps on my iPhone.


The mobile equivalents of AAA games are hardly as demanding as their desktop counterparts.


Of course, but that was my (and presumed common) use case for discrete graphics.


NO! Star Citizen is a good case for discrete graphics :-)


Apple are going to release chips? I think they are going to be releasing iPhones and iPads and laptops. To sell the transition, they'll call it iPhone N+1. "2x faster" is the sort of marketing their competitors are stuck with, right along with "cheaper" and using the Microsoft, Windows and Intel brand names rather than their own.


Apple has consistently used “2X faster” language whenever they can, and ALWAYS when justifying a major product change.

For them to not do this would be strange.


Performance is the least of my concerns owning an Intel laptop. Intel's obsession with it is why their laptops sound like jet engines, scorch laps and have lacklustre battery life. If I were Apple, I would advertise the improved quality of life from using an iPad-like laptop.


The A13 Bionic already has six cores, so expanding that to eight seems trivial in cost. The last time I read the BOM for iPhones, the ARM portion of the SoC was around $50. What are Intel laptop CPUs selling for?


That BOM doesn't include R&D either - something which the Intel chips have baked in.

Arguably the R&D costs would still exist for just their iPhone line. And I'm sure the cost to add a Mac line isn't as much as the Intel premium.


> For starters, they're going to have humongous caches that make the chips expensive.

Literally everyone has humongous caches. In fact Intel cache architectures are far larger and more elaborate than anything on an Apple chip.


Exactly, the ARM core is not the most interesting part in itself. There's a lot more to a CPU (let alone a System On Chip which is what the Apple A series is) than the instruction set.


Will Intel's management take the capital risk to gain a fab advantage again? Or will they ride all the way down from their peak for 30 years, with everyone at the top hopping off with golden parachutes?


>They've managed to change direction in the past

Did they have such competition in the past, plus their main market (desktop) becoming irrelevant and mobile (and maybe server) going to another CPU?


While desktops specifically are seeing declining sales, laptop sales have remained fairly strong in recent years. Intel still powers the lion's share of laptops sold be they Chromebooks, MacBooks, or Windows laptops. So it's not like their sales are instantly going to dry up. ARM is also still making progress in the server space but it's not predominant. Intel is still powering a lion's share of server hardware as well.

So Intel's got the income to make big shifts. There's also enough momentum of continuing x86 sales that their revenue from them won't dry up immediately if they did start building ARM chips.


>While desktops specifically are seeing declining sales, laptop sales have remained fairly strong in recent years. Intel still powers the lion's share of laptops sold be they Chromebooks, MacBooks, or Windows laptops. So it's not like their sales are instantly going to dry up.

I don't think absolute numbers are what matters. For a company in the stock market, the fact that mobile has grown to 10x the size of desktop sales while the desktop has become stagnant means they lost big on a market...


Intel has huge gross margins on their chips. They make up the difference in sales numbers with their margins. They've had minimal competition from AMD in the desktop/laptop and server markets so they've made money hand over fist.

While they could be a player in the smartphone CPU market it's not like they had it tied up and lost it. Intel's real challenge now is recognizing that the desktop/laptop and server markets are no longer guaranteed sales for them.


Big.LITTLE has always somewhat baffled me as to how well it works. Is my phone actually good at assigning low priority tasks to the little cores? It's always seemed unlikely given how half-assed it seems to be when it comes to much easier things like not trimming the app I'm currently working with for using too much power or taking less than five solid minutes to notice I've driven away from my house's wifi and it needs to use 4G instead.
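For what it's worth, the basic placement idea is simple even if real schedulers wrap it in a pile of heuristics: put a task on the smallest core that still has headroom for its recent utilization, and spill to a big core only when it doesn't fit. A toy sketch of that idea (not the actual Linux energy-aware scheduler code; the capacity and utilization numbers are invented):

    # Toy big.LITTLE placement heuristic, not the real energy-aware scheduler:
    # prefer the smallest (cheapest) core with enough spare capacity for the
    # task's recent utilization, and only spill to a big core when needed.

    CORES = [
        {"name": "little0", "capacity": 400},    # invented relative capacities
        {"name": "little1", "capacity": 400},
        {"name": "big0",    "capacity": 1024},
        {"name": "big1",    "capacity": 1024},
    ]

    def place(task_util: int, core_load: dict) -> str:
        for core in sorted(CORES, key=lambda c: c["capacity"]):
            spare = core["capacity"] - core_load.get(core["name"], 0)
            if spare >= task_util:
                core_load[core["name"]] = core_load.get(core["name"], 0) + task_util
                return core["name"]
        return max(CORES, key=lambda c: c["capacity"])["name"]  # overloaded: go big

    load = {}
    for task, util in [("email sync", 50), ("UI render", 300), ("game", 900)]:
        print(f"{task:10s} -> {place(util, load)}")

Whether your phone's scheduler actually tracks utilization well enough to make those calls correctly is, as you say, another question.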


>"They've got enough money they could just buy an existing ARM developer and then produce those chips in their fabs."

Is a developer in this context a third party who goes out and licenses the different IP Cores for a customer and then designs a chip using them for that customer? If so I'm curious why would someone use a third party developer over ARM itself? Who are some of the bigger ARM developers today?


"Intel losing out to ARM designs is really Intel's choice at this point."

Historically this seems to be the case most of the time. Market leaders miss the next iteration of their industry. The "Innovators Dilemma" is ever present.

"This 'telephone' has too many shortcomings to be seriously considered as a means of communication. Western Union internal memo, 1876.


What if ARM just refused to sell them a license?


As far as I understand, Intel has a license already.


for a very old version


They have an architecture license, which means they are free to design their own ARM cores.


Does the architecture license just allow them to create a CPU that uses the ARM ISA? Or does it include other ARM IP as well?


Limited to a very old architecture no one is using anymore.

ex-ARM employee here.


I thought the point is the architecture license lets them design their own?

Also, Intel clearly has some sort of license that covers modern cores as well, given that it manufactures FPGAs with various newer ARM cores based on its purchase of Altera.


Why would they?


It's not about the instruction set/architecture. It's about the process node.


What do you mean? This is just about nm?


AMD, Apple, NVIDIA and those AWS chips use TSMC's 7nm process node, which has a ~50% efficiency advantage over Intel's 14nm (the actual nm figures aren't comparable though).


Debating which ISA enables higher performance designs has been shown in the last 25 years to be a lot like debating which JIT API will result in faster execution.


No, it's also about system and memory interconnects, caches, various accelerators, core configuration, and indeed process nodes as mentioned above. I don't think there's anything inherently faster in one ISA than the other.


There can be ISA-level differences: Itanium’s VLIW structure meant a lot of memory was storing NOPs filling unused slots unless your work aligned perfectly with the CPU and you had a brilliant compiler.

These days, I think you’re right with the possible exception of density: if you have comparable resources, vector extensions, etc. an ISA which uses memory more efficiently is going to get a little more out of its cache and memory bandwidth, but that’s probably not going to be huge.
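To put a rough number on the Itanium part: IA-64 packs three 41-bit instruction slots plus a 5-bit template into every 128-bit bundle, and any slot the compiler can't fill is a NOP, so wasted slots turn directly into wasted I-cache bytes. A quick sketch (the 30% NOP share is just an assumed example, not a measured figure):

    # Rough code-density arithmetic for the VLIW/NOP point above.
    # IA-64 bundle: 3 x 41-bit instruction slots + 5-bit template = 128 bits,
    # so every unfilled slot still costs ~42.7 bits of instruction memory.

    BUNDLE_BITS  = 128
    SLOTS        = 3
    nop_fraction = 0.30   # assumed share of slots the compiler cannot fill

    bits_per_useful = BUNDLE_BITS / (SLOTS * (1 - nop_fraction))
    print(f"IA-64 at {nop_fraction:.0%} NOPs: {bits_per_useful:.1f} bits per useful instruction")
    print("fixed 32-bit RISC       : 32.0 bits per useful instruction")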


Then I fail to understand how Intel's perf/watt is behind some ARM vendors'. They probably have the best talent in the industry.


You're leaving out the part where management at Intel has been a complete shit show for over a decade.

> In ten years, the US won't be able to make electronics in volume.

Quite possibly, but if this happens, it'll be Intel's fault. Expecting their 500% margin on x86 to last was complete lunacy. They've had 40 years. Anything they manage to invent on the architecture side will not maintain their margin in a world where ARM is good enough. That margin is never coming back.

They need to get their act together, swallow their pride, and learn how to become a foundry like every other state-of-the-art fab on the planet.


I noticed this when some folks were trying to sell SAFe internally: https://www.scaledagileframework.com/case-study-intel/


It reminds me a lot of Boeing: they get an entrenched market that is very profitable and then get lazy.

IMHO the problem is in the C suites, period. The executives have not provided enough leadership, nor have they recruited others to do so if they themselves are not able.

We still have a ton of great semiconductor design. AMD, NVidia, and Apple's semiconductor division are here. They just don't fab here.

What we have seen with China is that if manufacturing leaves eventually R&D and design will start to leave too. The manufacturers realize they could own the whole shebang if they stand up their own design, product, and marketing, and the US has no talent monopoly.

I am waiting for a Chinese laptop that is built to Apple quality or better, has a fast ARM CPU, and can run either Linux or Windows on ARM. That could be disastrous for the domestic PC industry or what remains of it, especially if Apple flubs the ARM switch.

Edit: Samsung could do it too. They have a sleeper line of ARM laptops, but they are underpowered. Put out one of those with 16-32 cores, 16-64GiB of RAM, 1TB+ SSD, a (maybe optional) high end GPU, and that can run ARM Linux or Windows and Samsung could have a customer. If they invested in helping Linux desktop work very well on the machine they could take a lot of developers from Apple... or if MS removes their head from their posterior and stops stuffing Windows with shit. MS could offer a reasonably priced "no BS edition" without foistware or needless telemetry and I would consider it.


When the MBA guys and Wall Street start drinking their own Kool-Aid and believe that finance is what creates money, you have a problem.

Lee Hutchinson of Ars Technica once wrote that a junior engineer now at Boeing has no viable path for promotion towards the C-suite. The middle rank engineering teams have all been divested and the only way to rise up the ranks is to actually leave Boeing, work at a different company, rise up the ranks there and come back a decade later.


I grew up in Wichita and knew a few aerospace engineers as friends when I was still a teenager. The revolving door between the big companies is well known and has been going since the 90s at least. They told me pretty clearly that you wanted to be jumping companies every couple of years because the poaching offers were always way better for advancement than demonstrating loyalty.


As I've said a thousand times: the sorts of MBA types you are describing here are the US equivalent to Soviet "apparatchiks."


This is the best kept secret in corporate America. The McKinsey and private equity crowds like to think of themselves as these innovative and wild capitalist cowboys, when really they're just a bunch of Soviet rent-seekers with better haircuts.


To add to the metaphor: for many behind the Iron Curtain, the Soviet apparatchiks came imposed from a foreign country - Russia.

The only difference is that you're less likely to be sent to the Gulag, or worse.


US apparatchiks come from a foreign country of sorts too: coastal elite cities and coastal top-ten universities. That world is as far from most of America as Moscow was from Ukraine.


Just because you're a good engineer doesn't immediately make you a good executive.


When C-level people in an aircraft company start to lack engineering background en masse, we see what happens :( Boeing was once great.


Steve Jobs had a great explanation for this phenomenon. Companies with monopoly power do not require or benefit from product development in any form on most reasonable investment horizons. In practice they can only benefit from increasing their market via marketing, or increasing profits by financial engineering. Eventually the promotion process of the company will leave you with Sales, Marketing, and Finance folks running the show.


Do you have a source for this? I’d love to read more about it.


This is the clip, but the whole interview is worth watching: https://www.youtube.com/watch?v=P4VBqTViEx4


No, but in an industry as technical as semiconductors, I'd like to see the evidence that default-hire-MBA is ever better than default-hire-PhD.

AMD and TSMC both have always had technical leaders. Intel had a series of legendary technical CEOs during their most formidable years.

The first non-technical-PhD CEO they had was Otellini, but he had been there for 30 years before becoming CEO.


Of course not, but if none of your execs have any engineering chops then you're setting up for a world of pain.


There will only be a discussion if you define what "good" means first.


AMD is what happens when you have someone who is both.


They don't get lazy.

Think of it like a career change at age 42.

Imagine being married with a couple of children, and at 42 you tell your wife that you are going to spend all of your free time and a lot of your money working on becoming a lawyer. You want to do this because you think your current career is topped out and you want to be prepared for the future. In 6-8 years you'll be ready for the career change, but you'll need to start at the bottom of the lawyer pool and work your way back up; your expenses won't change, nor will your obligations - just your income, and it will go down substantially.

I worked in a startup and we had a product that cost $1k per seat and it was full featured and slick as hell, but we started getting competition from a company that was selling a bare bones product for $99 a seat. Our customers really liked their price, but they wanted our features, in the end price won and we started losing business to the competitor.

Because of my company we actually legitimized a market for a certain type of aftermarket computer card; we spent a ton of time and money helping to build the surrounding ecosystem so that our product could have a market. Once we invested all the money, the competitor came along and could ride the ecosystem we built, but we still needed to cover all of our costs, hence our high price.

I spent a ton of time arguing with the BOD about pricing and volumes, but they had financial models they had to meet and an $89 product had no place in them. Luckily for us we got acquired because of our expertise and the work that we did to create the market. If we hadn't been acquired we would have been overrun by our competitor, and it had nothing to do with being lazy.


The fault lies entirely on your previous company's management.

As Buffett says, "price is what you pay, value is what you get." Your ex-co failed to provide reasons why customers should pay for the value it offers.

Apple excels at this, despite naysayers saying that a premium consumer electronics is a "bad business".


They excel at it from the point of view of Tier 1 countries.

They want to stay the Ferrari, Bang & Olufsen of mobile phones and laptops.


>What we have seen with China is that if manufacturing leaves eventually R&D and design will start to leave too.

Yep, Apple et al. are reaping what they sow here. All their employees go find suppliers in Asia for their hardware. The travel schedules are so grueling that you basically have to be from the country/province where the supplier is located for it to be a tolerable job (free trips home!). So basically, we have Asian engineers seeking out Asian suppliers on behalf of an American company where only the top few levels are American. The chance those Asian engineers are going to cut out the American company in 10 years is high.


I know plenty of western-born people making regular trips to China and in charge of sourcing. Not sure where this claim is coming from but it doesn’t feel grounded on the reality I observed first hand. You realize Apple has something like 50k employees in the Bay Area. There’s very large teams responsible for hammering out the deals and engineering teams responsible for finding suppliers.


Apple has (well, had) many seats booked on Bay Area-China flights every day for people traveling between the two.


I think this statement is, in general, completely true. But it doesn't actually seem to apply to Apple, which continues to design many aspects of the iPhone. In part this is because Apple essentially funds and builds many of the factories it "outsources" to, which very few companies can afford to do.


This is an important point. When Hon Hai builds the plant they have to capitalize it over 40 years or so (per whatever the local equivalent of GAAP is). For Apple it's a current expense. Now most companies would prefer the longer capitalization period (it will increase earnings), but with Apple's huge cash reserves and almost by definition nothing better to invest them in than Apple growth, this lets them take advantage of the investment right away, then move elsewhere if the numbers look better.


What does capitalization over 40 years mean? They amortize plant costs over 4 decades? That sounds crazy because in this industry I would assume a plant would need to be completely retooled every 2-3 years.


Sorry, that was poorly written out. Under GAAP the building itself would typically have to be amortized over decades. A lot of the physical plant would need a lot of explanation to justify a short amortization period too (although not 40 years!), given that plenty of decades-old plants still churn out chips at large node sizes. And there is, as I mentioned, an asymmetry in accelerating vs deferring the expense in terms of sector-specific stock market perception.

But I was lazy in conflating the two cases so thanks for pointing that out.


Just realized that while the ARM transition makes technical sense, it probably increases the odds of Apple being commodified. Using x64 makes it at least somewhat harder for a competitor to do this, by offering competitors fewer choices and higher component costs.

By pushing the PC, the nucleus of all development, to ARM they are helping push the whole software industry onto a heavily globalized platform with many vendor choices and generally available IP.


I assume there is a double justification for Apple's marketing having chosen the phrase "Apple silicon". One of course is to steer the discussion away from the ARM architecture itself and any sense of commoditization. This was a problem with the Intel transition, though they were in a tighter situation so simply didn't message around it... and that was fine.

The second justification is that the CPU architecture is less significant at the system scale. The SoC really is a "system" (we used to predict this in the 1990s but couldn't get there then), with a lot of other somewhat specialized functional units designed to work together.

In that sense the A series is more reminiscent of the mainframes of old, where the CPU really was merely the central processing unit and a tremendous amount of processing went on in other processing units.


What wasn’t commodified to begin with about the PC industry? Or later, with the smartphone industry? There’s Apple and then there’s the commodity alternative with more market share globally etc.

To me this is Apple trying to stay ahead of ARM Chromebooks and the desktop-ification of Android/Chrome. If left alone another half decade, it's possible that Chromium, via either Google or Microsoft, could have made more of a dent in the wider market than just education uses. Adobe is already easing creatives' move to cheaper Windows workstations.

Frankly, Apple investing in unique silicon for their software simply means it will be more unique, just as the iPad has defended its place through an unbeatable software ecosystem and “cheap enough” hardware. If the Mac can go “cheap enough” to iPad pricing, this is Apple carving out its niche and market for the decade to come...


To me this is Apple trying to stay ahead of ARM Chromebooks and the desktop-ification of Android/Chrome.

Right. ChromeOS is getting interesting. At first glance it seems to be even more limited and appliance-like than iOS, but now you can run both Android and Linux software in containers, which opens up many more potential uses.


Before Boeing bought McDonnell Douglas, Boeing had a lot of executives with engineering experience. After the merger, the MD executives took over, moved the HQ to Chicago and started running it like MBAs.


> We still have a ton of great semiconductor design. AMD, NVidia, and Apple's semiconductor division are here. They just don't fab here.

I advise you to see how huge the overseas captive outsourcing centres of all of said companies are.


>The US already lost the DRAM industry, the disk industry, the display panel industry, most of the small component industry, and the consumer electronics industry. In ten years, the US won't be able to make electronics in volume.

That's how old empires fall and get replaced by new ones - and all while everybody in the old one remains complacent...

Most of this is out of greed - the corporations got bigger margins if they outsourced production. And the public could, in the short term, buy more for less - to the detriment of the local working class, and ultimately the middle class. And domestic stuff like rent, health, education, etc. would remain expensive.

First you lose the factory jobs, then the factories, then the readily available infrastructure to build one soon as you need it, then the investments to new technology, and ultimately, even the know-how (and meanwhile, lots of patents also go elsewhere, to your subcontractors).


That's the story the left tells to create outrage to get votes; it provides an illusory sense of control and redress.

"Corporations" are profit-seeking for the entire lifetime; there isn't a point where the making-money comes to the fore and ruins everything.

Automation explains almost all loss of manufacturing jobs. America today manufactures more than it ever has, with very few jobs needed. A car factory today requires a few hundred people, rather than many thousands.

As for why temporary monopoly companies become less innovative, the answer there is somewhat obvious: because innovation isn't the best profit-making strategy. But that's fine: hence the breakdown of temporary monopolies via competition.

Then there's the question of "why china" is where genuine competition comes from, and that's largely due to economies of scale and large pools of talent. The labour costs are secondary for the capacity to automate and integrate supply lines.

America is a massive country with low density, poor infrastructure and limited talent base for relevant skills. The latter is a choice of its citizens to educate themselves in a particular way. The former is the choice of its citizens to vote for particular government programmes: low-density sprawl, and tax-cuts.

If you want to "bring manufacturing back" (whatever that means) you need significant tax raises to build and subsidise significant amounts of infrastructure and subsidise education and training in the relevant skills.

Of course, that isn't as satisfying as voting for a rabble-rouser.


> If you want to "bring manufacturing back" (whatever that means) you need significant tax raises to build and subsidise significant amounts of infrastructure and subsidise education and training in the relevant skills.

IMHO large investments in infrastructure and education are the playbook China has followed for the last 40 years to become the world's biggest factory.

In China's case, they didn't fund it by raising taxes on an already-wealthy populace, but rather by going "lol, workers' rights?" and forcing an entire generation (or two) to work 12 hours a day, 6 days a week (996), making pennies per hour. [1]

The PPP of Chinese citizens has gone from ~$1000/year in 1990 to ~$17,000/year today. [2] That's a 17 fold increase, compared to just ~2.7 for the US over the same period. [3] I highly doubt any employee in a country like the US, UK, or Germany is going to accept a 996-style work schedule (in fact it would be illegal in many, if not all, of these jurisdictions) and especially not for the current PPP of a Chinese citizen.
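(Quick check of those multiples, using the endpoint figures quoted above; the per-year rates are my own derived numbers:)

    # Quick check of the growth multiples quoted above. Endpoint figures are
    # the ones from this comment; the implied per-year growth is derived.

    years = 2020 - 1990

    china_1990, china_2020 = 1_000, 17_000   # GDP per capita, PPP, USD (approx.)
    us_multiple = 2.7                        # multiple quoted for the US

    china_multiple = china_2020 / china_1990
    china_rate     = china_multiple ** (1 / years) - 1
    us_rate        = us_multiple ** (1 / years) - 1

    print(f"China: {china_multiple:.0f}x over {years} years (~{china_rate:.1%}/year)")
    print(f"US   : {us_multiple:.1f}x over {years} years (~{us_rate:.1%}/year)")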

> Automation explains almost all loss of manufacturing jobs.

This is also applying to China now. You can't have labour costs increase 17x over 30 years and not consider automation to reduce your costs. However, since China has historically been the destination of outsourced manufacturing, they have built up supply chains that the US used to have back when the US did "manufacturing."

Why would a profit-driven corporation spend money to bring the supply chain back to the US when there is no economic reason to do so? To me, this is a very large issue, separate from "infrastructure" that the US government has yet to seriously discuss. Raising taxes is a politically toxic move in many political systems, and it's incredibly unlikely a corporate entity would voluntarily choose to forego profit to help an OECD country re-develop its lost supply chains.

[1] https://en.wikipedia.org/wiki/996_working_hour_system

[2] https://data.worldbank.org/indicator/NY.GDP.PCAP.PP.CD?locat...

[3] https://data.worldbank.org/indicator/NY.GDP.PCAP.PP.CD?locat...


> forcing an entire generation (or two) work

Err... no. 996 is a recent culture of work in Chinese IT.

The reason subsistence-level agricultural workers moved to cities and factories is the same reason they have always done so in history: farming is dangerous, risky toil.

A wage is highly stabilising, and highly attractive. You do not need to "force" people from farms into factories.

> Why would a profit-driven corporation spend money to bring the supply chain back to the US when there is no economic reason to do so?

The US can engage in the same state-funded programme of infrastructure investment, skills subsidy, and massive sector-targeted tax cuts. That it chooses not to is the explanation of why the US isn't competing successfully. NOT "996".

It's just scapegoating to blame "the despicable Chinese!" when at the same time you're doing nothing to compete with them.


>That's the story the left tells to create outrage to get votes; it provides an illusory sense of control and redress.

The left? That's just reality, not a left/right thing. I think it's now common sense in both the left and right. Heck, Trump (which one would call right) admitted more or less the same (that globalization lost America jobs and sending things to China was a bad idea).

>"Corporations" are profit-seeking for the entire lifetime; there isn't a point where the making-money comes to the fore and ruins everything.

That corporations are profit-seeking for their entire lifetime is neither here nor there. You can be profit-seeking and selling good products and fostering a healthy community, or you can be profit-seeking and fucking customers and polluting and so on.

So, there might not be a "a point where the making-money comes to the fore and ruins everything" but there's absolutely a case that how a corporation goes about making-money can ruin everything.

Which also means there was a point in time where corporations were allowed (culturally, legally, etc.) to run wild with money-making to the detriment of other things (society, externalities, etc.).

>America is a massive country with low density, poor infrastructure and limited talent base for relevant skills.

China is an even more massive country and didn't have an educated workforce (tons of farmers etc.) before it began investing in building one. As for the population density, that's a red herring - China got workers brought over from the countryside to work in big cities, and America can do the same.


Insofar as Trump repeats the same tropes, he's being quite left-wing. Protectionism and mercantilism are typically left-wing positions as they are contrary to free markets.

> before it began investing on building one.

That is my point. Americans chose not to do that. A loss of manufacturing didn't "happen to America" like some crime. American society chose to be educated differently, chose to spend and raise taxes differently, and presently continues to choose to do so.

The "blame China and outsourcing" narrative is just misdirected scape-goating. Trump's repetition of it is a pretty strong signal of its mumbo-jumbo narcissism.

China has invested in its infrastructure to an incomparable degree. The US has not, does not, and will not. It is your fault.


I don't think Trump is repeating the same tropes because he is left-wing. He's just straight-out clueless, and finds a target, any target, for his attacks. China is a great target; whether the attack is right or not doesn't matter to him, as long as his supporters lap it up.


FYI, TSMC has announced plans to build a 5nm fab in Arizona: https://www.cnbc.com/2020/05/15/tsmc-to-build-us-chip-factor...


It's a small fab compared to the ones in Taiwan and by the time it's completed 5nm won't be their latest process.


And there is no guarantee that it will ever be, if the promised subsidies will not materialise.


It probably will materialize. Reading between the lines, the DoD wants close to leading edge fabs on US soil where they can better (in their minds) verify the physical security of what's being manufactured, and is willing to pay absurd amounts of money to make that happen.


Absolutely. The Arizona plant is a contingency facility; TSMC's plants in Taiwan lie within the range of Chinese missiles. It is important for the US not to be completely blocked in case of a war. So it is certain it will be built, perhaps in record time given the current economic environment.


The yearly budget of the DoD is more than twice TSMC's market cap, and the US and Taiwan are important strategic allies. The DoD has wised up to the strategic need to maintain advanced fabs in the US and is going to pay what it costs to make it happen.


>The US already lost the DRAM industry, the disk industry, the display panel industry, most of the small component industry, and the consumer electronics industry. In ten years, the US won't be able to make electronics in volume.

Welcome to Europe, I guess.


[flagged]


This is a fascinating glimpse into far-right ideology, and I'm really curious about this idea that welfare expense increases in proportion to ethnic heterogeneity. Where does it come from, or rather, how do people try to justify it?


[flagged]


There is nothing in that paper that suggests that is due to ethnic heterogeneity.


The converse of this is that protective trade policies have a cost. If the US government is going to protect its native semiconductor industry, it will be shielding inferior engineering from competition, doing so at the expense of US consumers and taxpayers.


The standard of living keeps going up in China. Why is it that they haven't needed to outsource to countries with cheaper labor to stay price competitive themselves?

Or will that happen, just further down the line?


With electronics, at least, outsourcing to China is no longer done because of "cheaper labor", if that was ever the reason to start with. It's because they do a better job of it.

Tim Cook:

> There's a confusion about China. The popular conception is that companies come to China because of low labor cost. I'm not sure what part of China they go to, but the truth is China stopped being the low-labor-cost country many years ago. And that is not the reason to come to China from a supply point of view. The reason is because of the skill, and the quantity of skill in one location and the type of skill it is.

> The products we do require really advanced tooling, and the precision that you have to have, the tooling and working with the materials that we do are state of the art. And the tooling skill is very deep here. In the U.S., you could have a meeting of tooling engineers and I'm not sure we could fill the room. In China, you could fill multiple football fields.

> The vocational expertise is very very deep here, and I give the education system a lot of credit for continuing to push on that even when others were de-emphasizing vocational. Now I think many countries in the world have woke up and said this is a key thing and we've got to correct that. China called that right from the beginning.

Source: https://www.inc.com/glenn-leibowitz/apple-ceo-tim-cook-this-...


Early on it was due to price, and the main contribution from China was in low cost final assembly work. For the first half dozen or so iPhones about a third to half of the value of the device, including for several models the CPU (Samsung plant in Texas) and Gorilla Glass (Corning in NY), was in components actually manufactured in the USA. The cost of final assembly in China was only a few dollars per device. These variables have changed though as Chinese manufacturing technology and skills have steadily climbed up the value scale.


This is a misrepresentation though.

First - cheap labour is still a huge part of the value. Point blank. I get quotes from China and you can literally see the 'labour' input on the spreadsheet, and on that basis alone, it's still very competitive vis-a-vis the West.

Second - the reason 'all those skills' exist there, and not here, is because of the wages and conditions. They do all those things at far less than half the price they do here. That's a big deal. And for the most part, they are effectively commodities.

If Apple et. al. wanted to pay 'American wages' for most of that activity, then major clusters of those skills would develop here.

Very importantly, there is the 'trade asymmetry' in that China has trade barriers, the US largely does not.

It would be 'good' if both economies kind of opened up, but it's like a 'prisoner's dilemma strategy game' in that, it's even better if you get your opponent to 'open up' while you 'remain closed'.

The US has lost dramatically by allowing China in on the same WTO terms, and all the other shenanigans. The US should really simply implement 'tit for tat' trade policies: require Chinese companies to give up their IP in the US the way US companies must hand theirs to a strategic competitor in China, not allow them to have >50% ownership of anything, put up indirect barriers; the US Treasury could direct major US banks to subsidise Cisco's customers for export, yada, yada. It seems dramatic, but that's what China is doing.

Finally, though we think of a lot of this as 'high tech' ... a lot of it is not. We don't worry too much about China making tons of 'plastic stuff' because we believe it's 'low value' - but as certain kinds of screens, disk drives etc. become truly commoditised, well, in some ways, it's a lot like assembling plastic toys, and the surpluses go to the buyers, not the makers.

Edit: I'm working on a project where our factory price in quantity is $60 here and will be $20 in China, and overall it all boils down to labour input costs and overhead through the value chain (i.e. the N. American manufacturer pays more for Internet, largely due to labour, and those costs get passed on). In the end, it's not the price of the materials, it's the layers of services by humans on top.


> If Apple et. al. wanted to pay 'American wages' for most of that activity, then major clusters of those skills would develop here.

They would, but you're still talking about a 10-20 year gap. First for masses of people to get the adequate education, apprenticeships, get good at their jobs, and then for these people to mentor the second cohort 8-12 years later to get the kind of volume China and Germany have.


Those jobs are not really coming back, there are too many specialised things in China.

And then you have the 'raw labour' - you still need factories with workers making very little, treated like crap.

The bottom 10% of Americans will not do that. For what it's worth.

Now - there is the possibility of 'full automation'. The Sony PlayStation is made in Japan by this amazingly automated assembly line.

America could feasibly do this if they wanted for some things, but it will take innovation.

The revolution will be: a 'pair of hands'.

When robotics and AI becomes sophisticated enough that you can buy a 'pair of robot hands' for $100K and very easily train them to do 'anything' ... then this will enable automation in rich countries and it could really damage the entire developing world economies.


> And then you have the 'raw labour' - you still need factories with workers making very little, treated like crap.

> The bottom 10% of Americans will not do that. For what it's worth.

OR you need to find other ways to bolster margins without skimping on labor costs at the bottom end. Intelligent cuts to C-Suite pay and golden parachutes could easily pay for thousands of workers at a living wage at the bottom of any company.

The top 10% of Americans will not do that. For what it's worth.


The 10th percentile in America is not earning all that much.

Nothing will make up for the low wages in China.

America could push up minimum wage some, but it won't change the dynamics of export/import.


The Top 1% alone is making more than enough in earnings. The fact that billionaires exist at all is proof enough that far more money exists that could be spent on living wages at the bottom end than currently ever is. Whether or not the rest of the Top 10% contributes nearly as much to labor costs disparity as the Top 1% is mostly academic at that point, especially if all it does is serve to prop up the Top 1%, which is more than fitting for a terse reply in the pattern of the poster before me.

> Nothing will make up for the low wages in China.

China's wages aren't even that low anymore. Maybe you are thinking of Bangladesh or somewhere that China is now themselves outsourcing "cheap labor" to?

> America could push up minimum wage some, but it won't change the dynamics of export/import.

Correct, it's not a labor cost issue at the end of the day. As I said, companies are more than capable of finding margins that work and still pay labor what it owes them. There are too many externalities companies are willing to, and legally allowed to, ignore that keep them from feeling any pressure or need to actually do so. The point I tried to make is that it is not "low wages in China" keeping jobs outside of the US. That's a solvable problem with honest dealing in the labor market. Companies just have no interest in solving it.

(The labor market isn't a fair market, for what that is worth, and has almost no liquidity. Corporate structures impact the labor market a lot more than the labor market affects corporate structures, whether or not you agree that the modern American "gentry" are playing too selfish a game with the American labor market and getting away with it.)


"The Top 1% alone is making more than enough in earnings. The fact that billionaires exist at all is proof"

No, this is an ideological statement, not a factual statement.

The 'earnings' (not wealth) of Billionaires would not move the needle that much.

But even the wealth: if Jeff Bezos gave up $100B of his wealth, it would be a one-time check to every American for $300. That's it. And just once.

Take up all the billionaires' wealth and it wouldn't amount to much for Americans, and it would be short-lived.

"China's wages aren't even that low anymore."

Yes, they absolutely are. To the point wherein there is absolutely no way for America to compete with them on a wage-basis.

"Correct, it's not a labor cost issue at the end of the day."

Yes, it is a labour cost, you're misinterpreting my point. Nothing America can do will make the US competitive with China for things involving labour.

" As I said, companies are more than capable of finding margins that work and still pay labor what it owes them."

This is just fantasy business math. There is no 'finding margins'. There is 'increasing prices' or 'lowering costs'. So either they pay suppliers less, or increase prices in order to pay greater salaries.

And 'owes them' - is again, just made up. How much does Starbucks 'owe' its baristas? What you say they owe? What the market will bear? Minimum wage?

"The labor market isn't a fair market, for what that is worth, and has almost no liquidity"

The term 'liquid' is generally not used for labour markets, you might mean to say 'fungibility'?

The US could afford to pay better minimum wages and probably better for the lower 1/3 - but again, it won't change what's happening vis-a-vis China.


> if Jeff Bezos gave up $100B of his wealth, it would be a one-time check to every American for $300. That's it. And just once.

Yet we aren't talking about "every American", we are talking about increasing the labor force, so your math isn't that relevant. Instead, let's say that Amazon required Bezos to put that $100B back into wages for low-level employees for ten years: $10B/year works out to as many as 320,512 people employed at a $15/hour full-time minimum-wage job for those 10 years.

You are correct that I suggested wealth rather than earnings because of course most of the wealthy play shell games with their incomes that launder as much as possible of their income into other things that aren't "earnings" on paper (because one is taxed much higher than most of the "capital gains" taxed sorts of passive incomes). I mentioned golden parachutes, I was not trying to hide that wealth is a bigger component of the debate than just "earnings".

That said, even Bezos' over-the-table non-stock "earnings" are still nearly 3x a $15 minimum wage earner.

All of the above is mathematically true, so easier to consider "facts" just as much as yours. No ideological statements there. I could bring out some actual ideological statements if you wish to see what those really look like.

> This is just fantasy business math. There is no 'finding margins'. There is 'increasing prices' or 'lowering costs'. So either they pay suppliers less, or increase prices in order to pay greater salaries.

I agree, it is fantasy business math, but it is fantasy business math that is the current language of the MBA and Wall Street folks managing this mess. I did not invent it, and I would use more direct language if it weren't readily apparent that this is the preferred language of those currently propping up the established system.

> And 'owes them' - is again, just made up.

By that logic, all of capitalism is made up. All of human history is made up.

Even setting aside the deeper ethical questions about what an honest society owes to its labor class, I did explicitly define it in that context as capable of meeting the requirements to move those jobs back to the US, which in this case would specifically mean "US minimum wage", the law of the land that people keep claiming makes the US noncompetitive with China; you are just needlessly splitting hairs here.

(To be fair, I'm happy to continue splitting hairs, as I did use "owe" to evoke both the indebtedness meaning as well as the philosophical one as well. But it was well defined in context.)


I feel like “and Germany” gives it away.

Germany, Japan, Taiwan, and South Korea are all developed countries that don’t have “cheap labor” anymore. They found and are holding their niches. Even in terms of “cheap labor”, China’s losing out to places like Bangladesh. A lot of these excuses for why America can’t compete in manufacturing sound like sour grapes to me.


I feel like you are missing the point about Germany a bit.

Vocational and apprenticeships are still a thing there.


I’m not missing the point. That is the point. The US wasn’t undercut by countries with cheaper labor; we just stopped caring about making things because our business, educational, political, and social culture all devalue it.


> Vocational and apprenticeships are still a thing there.

Exactly. But it’s the “American dream” to send their kids to college, even if that’s the completely inappropriate thing to do for that child’s future prospects.


> They would, but you're still talking about a 10-20 year gap.

Sure. That’s ok. It’ll take time to see a return on investment. I don’t see this “but X” as a counterpoint or really an issue at all.


Who exactly is going to make these investments and can afford to wait 20 years to see any return on them? Why would such an investment be made when more competent countries already have the capacities needed today, not twenty years from now?


Well, there's a lot of nuance here so I'll speak quite generally in that the United States could certainly afford to wait 20 years to see a return on investment (we do this with many, many programs and policies even today - NASA for example, food stamps, education) and I'd also argue that a country to the extent possible would tend toward valuing the ability to produce goods in the event that the supply lines of goods being imported is shut off.

Further, you could make investments now that exceed current capacities and capabilities today with an eye to the future.


I want to clarify that I’m 100% in favor of such a policy, in principle. It’s the implementation that’s hard.

Actually the US does a lot of good manufacturing in some fields, like arms. But in other fields, well-intentioned protectionist policies have backfired. For instance, rather than subsidizing US shipyards, the Jones Act protects them from foreign competition, which has led to their significant decline over the past century.

US policy to rebuild or improve US industry has a very mixed history in terms of results. It probably doesn’t help that a lot of these efforts are focused on bailing out the same old manufacturers over and over again to save jobs in battleground states.


Thanks for the post. If (as I assume) China is not the primary market for your product, how much of the final price is determined by shipping from China to intermediate warehousing locations or to customers? i.e. if you were to manufacture in the market you're selling and save the bunker fuel costs (again - assumptions) of transporting in containers from China, how much does that offset increased labor cost?


Shipping small items in quantity by surface is cheap.

The issue with China now is 25% Trump Tax, which is huge. For less sophisticated goods, Malaysia, Vietnam, etc. are becoming options.


I find his comments on vocational education particularly telling since he's part of the political elites that have been deriding vocational education and promoting "higher education" that is more indoctrination-oriented than actually educational.

I wouldn't be surprised if Apple being a major contributor in propping up China for so long before taking action (like their recent diversifications to India and Brazil for manufacturing) is going to be a stain on his legacy long term.


>he's part of the political elites that have been deriding vocational education

What does that mean? Are you talking about opinions he has expressed, or are you just saying he's responsible for the opinions of other people you associate him with?


Chinese government acknowledges outsourcing to countries with cheaper labor as a threat, and works on combating it. For example, their "Made in China 2025" plan is centered around establishing China as the global technological leader in a number of high-tech industries and a source of skilled labor, not as the world's cheapest manufacturing plant it was just a few years ago.

It still remains to be seen how successful they will be, but at least they recognize this situation as a problem in the near future.


> Chinese government acknowledges outsourcing to countries with cheaper labor as a threat

I wish the US had seen this with equal clarity 30 years ago.

Step 2 is to steadily increase prices once they have a manufacturing monopoly, and impose tariffs on imports.


I wish the US had seen this with equal clarity 30 years ago.

Some people did. Even many politicians did. But they were shouted down as being racist and protectionist, and "globalism" was the future. Anyone who didn't get on board was called old and stupid.


Or, we could have just made it easier for skilled workers from other countries to immigrate to the United States. 30 years ago (or even 5 years ago) I'm sure most of them would have happily immigrated to the US for a higher standard of living and better wages, and both they and the United States would be in a stronger position.

IMO if a foreign adversary spends 20+ years and tens of thousands of dollars (at a minimum) training a highly skilled worker we would be insane not to take that skilled worker for free, if they want to move and work here.


The problem was not lack of skilled labour here, though it is now, but rather the high cost of that skilled labour here, then and now. You can let in a skilled worker from Czechia or Romania but the expected wage here will be much higher than that of a Chinese worker, and they will be unemployed because the American designer of the product will get it made in China.


You're missing the point. If all of China's top skilled labor moves to the US, even if they are cheaper then any manufacturer that needs the highest quality will have no choice but to manufacture in the US.

Limiting immigration of highly skilled workers is a massive national security failure. It's a huge mistake. If a Chinese or Russian top-tier engineer or AI researcher wants to immigrate to the US we should roll out the red carpet because they're a strategic asset.


Brings back memories of those “Born American Buy American” bumper stickers. I was only a kid but I remember them everywhere.


30 years ago it was the left that was criticizing globalism. Apart from a fringe wing of the Republican party [Pat Buchanan, etc.], anybody on the "right" or "center" was doggedly pro free trade until just a few years ago, and those of us on the left who protested the introduction of NAFTA, or opposed the WTO, IMF, or the G7 or (like myself) got teargassed at the Summit of the Americas weren't shouted down and called "racist" but told we were economically illiterate and just you wait, all this free market will make China democratic...

In Canada the only party to consistently oppose NAFTA and the FTA was the NDP, the left wing party.


Interesting how different two people can come to two completely polar conclusions. I guess it depends on your social bubble, on either side.

In my experience, the right was very opposed to NAFTA (while the left was pushing it), in favor of free trade but not at the expense of jobs and manufacturing ability in the USA, and thought all these deals with China were going to enrich a possibly future adversary. By experience, I simply mean social interaction with middle class conservatives/Republicans, not experience in politics or "the party". And, I didn't really start paying attention to what was going on until the early 90s. (edited for a paragraph break)


I think we have a terminology problem. If you think of Clinton and his Democrats as "left" then, sure, they were/are highly pro-neo-liberal globalization. To the rest of the world outside the US, that's nowhere close to left. They're neo-liberals, not socialists.


That's a good point, thank you.


In my experience, the right was very opposed to NAFTA (while the left was pushing it), in favor of free trade but not at the expense of jobs and manufacturing ability in the USA, and thought all these deals with China were going to enrich a possibly future adversary.

I remember it the same way, and I was a journalist at the time and wrote many stories about GOP politicians sounding warning bells about American jobs going to Mexico. Which they did. And then they went from Mexico to China.

I also remember that conservative companies like Walmart were so against NAFTA that they actively promoted Made In America goods in store and in advertising. My how times have changed.


I'm 99% sure that historically speaking, the OP is correct. Wikipedia's introduction to NAFTA sums up my recollection:

> The impetus for a North American free trade zone began with U.S. president Ronald Reagan, who made the idea part of his 1980 presidential campaign. After the signing of the Canada–United States Free Trade Agreement in 1988, the administrations of U.S. president George H. W. Bush, Mexican president Carlos Salinas de Gortari, and Canadian prime minister Brian Mulroney agreed to negotiate what became NAFTA. Each submitted the agreement for ratification in their respective capitals in December 1992, but NAFTA faced significant opposition in both the United States and Canada. All three countries ratified NAFTA in 1993 after the addition of two side agreements, the North American Agreement on Labor Cooperation (NAALC) and the North American Agreement on Environmental Cooperation (NAAEC).

Politically speaking, NAFTA passed in the Clinton administration because he was able to get enough Democrats on board. To this day, free trade doesn't seem to be strongly supported on the left, in part because it's not strongly supported by unions; it's mostly strongly supported by economists, think tanks, and publications that might be characterized more as centrist (the Council on Foreign Relations), center-right (The Economist) or libertarian. I'm not entirely sure why American conservatives in particular moved so strongly against it in the last two decades.


> I'm not entirely sure why American conservatives in particular moved so strongly against it in the last two decades.

There are a couple of factions that make up modern American conservatism. I think it was mainly big business/ideological libertarians that were strongly for free trade, but I think they were far from the majority, numbers-wise.


That flies in the face of the actual policy that was put in place in the Reagan and Bush regimes, and what was advanced or defended by the Republican dominated congresses that existed throughout the 90s.

The 80s and 90s were the era of the FTA and NAFTA, and the rise of the fortunes of the IMF and the WTO. And apart from Clinton in the late 90s this was an era of (for the time) pretty strongly right wing American politics.


In case anyone sees this, thank you for the comments despite my comment being down-voted. I appreciate the insight!


It was neither the "left" (get real) nor the "right" who were pushing for globalization, but multinational corporations. They happen to be on the political right because their goal was to reduce workers' wages and union power, but in fact they have controlled both wings of American politics, the Democrats (fiscally right-wing and socially liberal) and the Republicans (fiscally Right-Wing+ and socially conservative) for a very long time.

* liberal is not considered "Left" outside of the United States


> liberal is not considered "Left" outside of the United States

I see this repeated as a talking point by multiple individuals. I don’t really get what the point of the comment is or what I’m supposed to learn from it or do about it. It feels pretty close to a No True Scotsman, but to what end?


Because it would help if we were closer to talking about the same thing when talking about a subject.

And because Americans are being lied to by their own politicians, and part of that lie is that they have a left-wing opposition. They don't, really, apart from the Sanders wing of the Democrats. That Fox News or whatever in the US can with a straight face call Biden "left wing" isn't just a political vocabulary corruption, or a relative shift of frames of reference -- it is an ideological lie, and the purpose of it is to advance an agenda.

And because it would help if Americans looked outside of their own borders at the rest of the world, and gained frame of reference from that.


> Because it would help if we were closer to talking about the same thing when talking about a subject

I'll just quibble with this and say that if we're talking about US politics then we are talking about the same thing when we speak of the "left" in the US, because that's what it is regardless of the state of the rest of the world.

I also am not sure there is much value in arguing about how much more "left" some countries are. The fundamentals aren't really different until you talk about communists (and if that's what you're talking about, I abhor communism and don't want that in my country). If you're talking about universal healthcare for example, you're not really talking about something radically different between "more left countries" like the colloquial Sweden, it's just a matter of degree of difference.


Ok, if we're using "universal healthcare" as a minimum definition of "left wing" (for what it's worth I wouldn't accept that, since even Bismarck's right-wing authoritarian Prussia had a universal system), that just supports my argument. Universal healthcare is off the table by both parties in the coming federal election. Biden is against it, so is he left wing? (No, sorry, the "public option" tack onto Obamacare doesn't count)

The reason this matters is because as long as Americans accept this false polarity they are missing choice. Not just because they're a two party system, but because their two parties are defined along an ideological axis that misses entire policy choices. And Americans have been mistakenly educated to believe that "socialism" = "government involvement", and that it is an extreme 'left' pole of the political spectrum, and as such they are increasingly ending up with dysfunctional state systems that do not serve the populace.

That, and being able to speak to the rest of the world about politics, is why clarity of terms here is, I think, important.


> Ok if we're using "universal healthcare" as a minimum definition of "left wing" (for what it's worth I wouldn't accept that, since even Bismarck's right wing authoritarian Prussia has a universal system)

I think universal healthcare is a suitable minimum definition of left wing despite that, since a minimum definition is “without this element, a thing cannot be left wing” not “with this element, a thing must be left wing”.

> Universal healthcare is off the table by both parties in the coming federal election.

No, it's not, whether you are referring to the Presidential or Congressional elections, both of which are federal elections and which, unlike in most parliamentary systems, are separate-though-concurrent elections where the candidate platforms even from the same party have no necessary alignment.

> Biden is against it,

No, he's not.

> so is he left wing?

No, he's a center-right neoliberal, just like the rest of the dominant faction of the Democratic Party, and up through at least the early 90s the dominant faction of the Republican Party, too.

A sizable minority of the Democratic Party, though, is (mostly center-)left, and they are a not-insignificant factor in Democratic policy stances currently (US major political parties are effectively multifaction coalitions, and just like any coalition, the policy preferences of the dominant faction aren't always those of the coalition.)

> (No, sorry, the "public option" tack onto Obamacare doesn't count)

While I don't particularly like it, being part of the more-left-than-Biden part of the Democratic Party, the public option proposed by Biden as recently fleshed out by the Biden-Sanders joint policy group is, in fact, a proposal for universal coverage, not too dissimilar from some other OECD countries, all of which have universal healthcare but far from all of which have public single-payer like Canada's Medicare or public provision like the UK NHS.


Yup! How many conveniently forget this now when it's politically inconvenient.


> I wish the US had seen this with equal clarity 30 years ago.

We're too busy arguing over irrelevant stuff, unfortunately.


That boat had already mostly sailed 30 years ago. We would have to go back to the 60s or 70s to change course, and while the use of outsourcing created a lot of wealth for the USA, I do think we should have done it differently.


It created a lot of wealth for some in the USA. It destroyed a lot of wealth for others in the USA.


Yes, but that is true of any advancement. In the long run, it just means we should have stopped devoting jobs to low level manufacturing and changed to higher level ones (e.g. software).


>the use of outsourcing created a lot of wealth for the USA

More like it transferred that wealth to the managerial class, while the US worker is flipping burgers for minimum wage now.


They already have tariffs on imports. And not just the ones to spite Trump.

iPhones are twice as expensive in China and they always have been.


It remains to be seen.

But one important factor in why China is ruling this industry right now is not only the cheap labor, but also the massive supply chain. If you're prototyping some electronic hardware in Huaqiangbei, Shenzhen, you can probably find all the parts you want in one hour.

It's extremely slow and expensive to do the same anywhere else in the world. And it still holds true for other parts of China (many of which offer the same quality of labor, but cheaper).


They have massive reserves of cheap labor and cheap space that are still far from being exhausted. Some wage inflation has been observed, but we are still decades away from Chinese companies being forced to delocalize to compete.


That's happening. The T-shirt industry moved to Bangladesh.


Yep. Lots of clothing and other manufacturing has moved westward (geographically) from China. A chunk of it seems to be landing in Africa these days, and Africa is gearing up hoping to be the world's new manufacturing go-to.


I got a Kawasaki motorcycle around 2009 that was built in Thailand. Then again, the steering head came loose after I'd ridden it for a little while...


China is a big country - could it be possible that while some people are indeed seeing an increase in their standard of living, other areas are still in relative, if not absolute, poverty? We've all read the stories about Foxconn and some of the conditions endured by people working there.


> China is a big country - could it be possible that while some people are indeed seeing an increase in their standard of living, other areas are still in relative, if not absolute, poverty?

This is absolutely the case, and you don't even necessarily need to look to "other [geographic] areas" to find them.


They do outsource to SE Asia and India. There is also a prison population that is compelled to work for free or barely any pay.


They are starting to outsource to India and Africa. Other international corporations are eyeing those too. Some kind of African union is in the works, which if it helps further stabilize the region politically will make it more attractive. Historically political corruption has kept industry away from Africa.


It’s gonna be weird when there’s no cheap labor left and even the American middle class can’t afford anything that wasn’t 100% machine-made, anymore.


Once there is no cheap labor left we will see an incredible amount of consumer inflation because prices need to rise to accommodate higher wages.


I think this is eventually inevitable. After India (which is only marginally cheaper than China) Africa is the largest reservoir of potential cheap labor. When that's started to experience wage inflation there will be nowhere else to go except automation, and automation is generally oversold and is a lot harder than it seems. It's easy to automate the most repetitive 10-20% of an industrial process, but exponentially harder and/or lower margin as you move down the long tail. Most people aren't aware of just how much of the stuff they buy is still made by hand.


Why do you think wages wouldn't rise as well in that case?


They have. Chinese factories have already lost business in the lower-end of the value chain to places like Vietnam. It's only just started, but the Chinese aren't immune from the basic laws of economics.


Is labour a big proportion of their costs? I was under the impression that modern foundries are highly automated.


Foxconn is moving some manufacturing to India.

https://www.reuters.com/article/us-foxconn-india-apple-exclu...


If we took this to the logical conclusion, nobody in the US would be able to afford faster CPUs because of the high price of domestically manufactured textiles and vegetables harvested with domestic labor.


It will happen, just further down the line.

I watched the same thing happen to Japan way back.


> The converse of this is that protective trade policies have a cost. If the US government is going to protect its native semiconductor industry, it will be shielding inferior engineering from competition, doing so at the expense of US consumers and taxpayers.

Protective trade policies have costs, but free trade policies have costs too--they're just different ones.


>> most of the small component industry

It's not glamorous, but Texas Instruments still operates a pile of fabs in the US, and is well positioned for the anticipated IoT boom.


Are they though? You can buy a fairly nice old-school TI microcontroller - or you can buy an ARM with enough onboard RAM to run Java, program it like you're writing a phone app, and pay a similar price. Battery consumption will be worse, but are your users going to notice?


>> Battery consumption will be worse, but are your users going to notice?

If it's IOT, yes definitely. Some things you don't want to plug in every day--they have to run 10yrs on battery. TI is pretty good in that space.


TI has a reasonable set of ARM microcontroller offerings in the TM4C lineup. Nothing unique but plenty workable.


I mean Intel stopped making DRAM because they were soundly outcompeted. We lost the DRAM industry before we actually stopped making DRAM.

Disks afaik are still made in the US, although not only in the US.


Who's still making them in the US? I know Seagate and Western Digital's factories are in Thailand and China.


My mistake- Seagate's Minnesota factory doesn't do platters.


I don’t get why exactly this is bad. The listed competition, namely Nvidia, AMD and Qualcomm are all American multinational corps. More competition within the US means more innovation within the US, no?


You're missing the part where Nvidia and AMD have their latest and greatest chips made by TSMC. Arguably the node is just as important as the design supplied.

Intel does both designing and fabbing in the US.


When you say it is owned by the Emirate of Abu Dhabi I would be curious to know how this works in terms of geopolitics.

What happens to this national infrastructure in a time of war? If one country places its infrastructure inside another's, then in a state of war the Emirate would certainly lose access to their chip fab. They'd want to ensure no one else can use it either.

Would something like the Global Foundries site somehow be rigged to be permanently disabled, maybe even remotely, so as to not let it fall into the host state's hands?


You're thinking of it as a strategic resource when it isn't - it's just an investment holding. If war looked likely they'd just sell it on the open market.


GF's 14nm fab in Malta, NY is not going to be upgraded to 7nm either. They cancelled it.


Well, TSMC is building a new fab in Arizona; it will be a $12 billion, 5nm chip production facility. So the US semiconductor industry will still be ahead. And sooner or later others will catch up too.


TSMC will be setting up a 5nm fab in Arizona from this year - production starts next year.


>14nm

>state of the art

You must work at intel


So even if it's made in the US but foreign-owned, it is bad. Even if the technology is superior.


Intel is very bad for the semiconductor industry. The more they are dethroned the better.


We lost all those industries and it made us wealthier. It’s the law of comparative advantage in action.


Ah yes sort of like when we sold out all our electronic component manufacturing in the UK to Philips in the 1960s-1980s. These eventually got spun off to the Far east.

Add up enough lost local industry like that and you get unspoken pockets of economic vacuum, and two generations of children in poverty and unemployment instead.

Very short sighted.


US sells out assets and spends the money on "stuff" rather than creating new assets. That's the opposite of "wealthier".


GDP numbers disagree. We are still a world leading manufacturer and the world leader in services. We focus on building things that we have the largest comparative advantage. That means software, but not commodity electronics like RAM.

Apple’s iPhone business collects the majority of worldwide smartphone profits. Its iPad business collects the majority of tablet profits. Its Macintosh business collects the majority of PC manufacturing profits.

It does this by focusing on design, development, and usability, primarily through software. It keeps costs down by outsourcing assembly to Asia. If it brought that low-value work back, it would make its shareholders and America poorer. It would sell fewer devices at higher prices, losing high-paying developer jobs, all for a few more minimum-wage-level jobs.

If US chip makers have competitive advantages they don’t need our help. If they don’t, helping them diverts talented resources from better industries that would do more to raise our living standards.


You gave me a massage for $100, I polished your nails for $100. GDP increased by $200. Who became wealthier?

Apple sure makes its shareholders wealthier. I guess you can say that about its 90,000 USA employees, and that's about it- there's pretty much no supplier network to talk about.


In your example, you got a massage and I got my nails polished. We both did.

Apple pays app developers over $30B a year. It’s created hundreds of thousands of high paying jobs at third party developers.


There is no money in Semiconductors. US owns the social media industry and that is where the money is, not computers.


TSMC made $11B in profit in 2018, and that's only going up.


TSMC made about $3B in free cash flow. FB made about $20B. Social media is much more profitable than semiconductors. However, that kind of thinking will leave your country screwed, because if you don't have chips, Facebook doesn't work.


Social media industry is very fickle, as the rise of TikTok shows, not to mention far easier to replicate by any country which desires so.


Fickleness is the right word.

TikTok was the social media darling not long ago. Now it's in hot water because of privacy issues.


There may not be as much money in semiconductors as there used to be, but there is a lot of power. If you end up in a war and your enemy can produce semiconductors while all you have is Facebook and Twitter, that's going to be a bad day.


US owns the social media industry and that is where the money is, not computers.

How does one have a social media industry without computers?


friggin' snail mail

do you really think social media(s) appear(s) only with computers?


Snail mail is social contact, not social media. Just like calling your mom on the telephone is not social media.


No. A friend of my father's used to be part of a chess-by-mail society. He was participating in several competitions at the same time, and periodically received the national as well as regional bulletins on competition standings, progress, tidbits, etc. I was very fascinated by this whole setup...


And I heard as well that stamp collectors' and coin collectors' societies worked in the same way: bulletins, regional and all-country, exchanges, etc.


All industries whose markets measure in the billions of dollars appear only with computers.


well, that might be true, if we measure social media in money


Chinese people don't use US social media networks.


But the rest of the world does.


Very few countries currently block access to Facebook. (China, Iran, Syria, North Korea.)

And I think the situation is similar for other US-based social media sites such as Twitter.

But, this list could very easily grow. There are almost 200 countries in the world, and about one-third of those are authoritarian regimes, and at least another third could be classified as semi-authoritarian. That's a lot of countries which might be inclined to ban US social media and could do so in the future.

I think China could export its own social media services to authoritarian and semi-authoritarian governments in other parts of the world – a lot of those governments would be very interested in social media offerings with built-in tools to enable censorship and information manipulation.


> social media offerings with built-in tools to enable censorship and information manipulation

ha-ha-ha-ha-ha, Zuckerberg would like to have a word with you


If you haven't noticed, Apple, Microsoft, Firefox, and Brave are waging war against the business model of Facebook and Google. There are going to be billions of devices where tracking users across the web simply won't work any longer.

Chrome is the only major browser left that doesn't block 3rd party cookies, tracking and fingerprinting. We’re probably at peak Facebook right now.

Meanwhile, virtually every device needs semiconductors.


Are there any actually profitable social media companies?


Facebook is very very profitable, they pulled in almost $20B in net income in 2019.


Are you talking in dollars or rubles when you say profit?


If you have not heard about GlobalFoundries, they're the folks that manufacture the semiconductors for AMD, Broadcom and Qualcomm, among others.


Apple is competitive with Intel, not ARM in a more general sense. So I don't really see how Intel could magically produce a vastly better microarchitecture by retaking an ARM license... Especially since the microarchitecture is not completely independent from the ISA, and Intel is a specialist in x86, not ARM. So it would take a huge amount of time to reach the same level.

And that would not fix their process either...

Plus, to be competitive on performance Apple is using some tricks (larger page size => huge impact on L1) that may be difficult to apply to Windows, given the Windows ARM ecosystem already exists (even if it's not very thriving for now).

Also MS is attempting a somewhat closed ARM hw ecosystem, so that's not a full replacement for x86.

Finally, the right comparison is currently more likely to be AMD, not Intel.

So I think x86 is here to stay, or at least that's not the Apple switch that is going to initiate a revolution.


Agreed that x86 isn't going anywhere, at least for another decade. The point is that we're about to embark on some very interesting and untested waters for the overall silicon/semiconductor industry.

The collapse of so-called "Wintel" will not come directly from Apple's move to its own silicon, but as a second-order effect on the other OEMs such as HP, Dell, and Lenovo. Just as they're reaching build parity (e.g. XPS 13, X1 Carbon) with Apple's MacBook offerings, Apple is potentially about to pivot and take a lead where they can't follow - in CPU performance. Make no mistake that ARM-based MacBooks will easily surpass the performance of the existing Intel-based offerings. What will Dell, HP, and Lenovo do to be competitive?

Microsoft has arguably seen this risk and trying to mitigate with shifting Windows to ARM. But there's a big problem - Qualcomm is not Intel and their chips as-is (e.g. SD845) have both compatibility and performance issues. Are there any alternatives? AMD could bring back their shelved "Seattle" chips, or maybe Intel's Lakefield platform will be fruitful, or even Nvidia will join the fray with a worthy Orin-successor?

Very interesting times ahead indeed...


> So I think x86 is here to stay, or at least that's not the Apple switch that is going to initiate a revolution.

It will, indirectly. When MacBooks offer 1.5x the performance at 2x the battery life, people using Windows or Linux will start to demand similar devices from Dell, Lenovo, HP, Acer, etc;

It makes me the saddest for AMD. They’ve just clawed back a few victories from a decade of loss, and starting within 5 years that will hardly matter.

As for the customer: this will be an amazing time, especially if we finally get some standardization around ARM SoCs. Apparently every single one of them (at least for phones) does its own little boot quirks, instead of there being a unified BIOS or EFI or whatever to guide it all.


> It will, indirectly. When MacBooks offer 1.5x the performance at 2x the battery life, people using Windows or Linux will start to demand similar devices from Dell, Lenovo, HP, Acer, etc;

> It makes me the saddest for AMD. They’ve just clawed back a few victories from a decade of loss, and starting within 5 years that will hardly matter.

I wouldn't be sad for AMD at all. This transition presents them with an amazing opportunity to take more of the client market.

The performance deficit you cite is more of an Intel performance deficit, as current AMD processors also substantially outperform Intel's offerings. I would expect AMD Zen 2/3 and Apple Silicon to perform similarly in terms of performance/watt, as they're both modern processor designs manufactured at TSMC.

Unless Intel can quickly resolve the manufacturing woes that keep them from moving to a competitive semiconductor process, I expect the PC OEMs threatened by Apple to turn more often to AMD to stay competitive.


I'm not too worried about AMD as long as Lisa Su is in charge. They know how to make high performance CPUs and they probably didn't finish their K12 only because x86 seemed more profitable for the time being. I don't see Su missing an important trend when the time comes - when there is sufficient demand for ARM, pioneered by others, and the company has gained enough x86 marketshare (an ongoing, slow process) to be able to afford more parallel R&D.


> It will, indirectly. When MacBooks offer 1.5x the performance at 2x the battery life, people using Windows or Linux will start to demand similar devices from Dell, Lenovo, HP, Acer, etc;

Well, people can demand that, but if only Apple knows how to design the chips, and given they will not sell them to competitors...


Users don't want chips they want systems and the way people will "demand" them will be by buying Apple. Dell and Lenovo will have to come up with something or cede market share.


I know Apple probably sells the most computers of any single company, but in terms of total market share they are under 10% (laptop market)? Part of that is cost. Apple's cheapest laptop is $999. I can get 3 Chromebooks or even 3 Windows laptops for that. Sure, the $999 MacBook Air is a much better machine, but the person buying the $300 laptop, or even the $600 laptop, doesn't care.

It's like saying BMW or Mercedes is a better car than a Kia. I'm sure it is but neither my sister nor my 2 adult nephews can afford the Mercedes and neither can they afford the MBA.

Just like Kia will still get them to work and to the grocery store the cheaper laptops will still let them facebook/twitter/ms-office/slack/google/maps just fine.

Going on Amazon and typing "laptop", 6 of the first 7 hits are under $500, so less than half the cost of the cheapest Mac. The one that is over is $579, so still nearly half.

I'm typing this on a $3600 MBP and I have a $2700 MBA as well so it's not like I'm not an Apple fan but I see what my family buys they'll opt for the perfectly adequate $550 computer over the probably superior $999 one.


Where I come from, people certainly won't be rushing to buy Apple with 300 euros as the minimum wage and 1000 euros on average for a regular IT job.


Sure, but neither are they now so it makes no difference to Apple.


Which makes the whole point of the article meaningless.


That's the direct pressure, not the indirect pressure.

And yeah it'll exist but if they decide not to make the OS replaceable then there's going to be a severe dampening effect on people switching to Apple hardware. So we'll see on that front.


Interesting times ahead, that's for sure.

In my mind the small wildcard is going to be how effectively Apple can scale up CPU performance as their thermal budgets are increased. But the big wildcard is going to be how their in-house GPU tech scales up. It's going to be strange times ahead if Apple can bring the fight to AMD and Nvidia's mid-tier discrete products.


Where is the pressure for Apple to beat the GPU performance of AMD or Nvidia going to come from though? Game consoles are stalwartly back on x86 for at least another generation (Xbox Series X, PS5), so AAA developers will have even less pressure for ARM ports and especially macOS-specific ARM ports. Apple may be hoping for more "bottom up" ports from iOS/Apple Arcade, and ARM macOS will make that easier, but games ported up from mobile don't seem particularly like a source for parity with AMD/Nvidia, much less a source for innovative competition "fight".


To be clear, I was referring to AMD/Nvidia mid-tier. And I don’t expect Apple to come close to that any time soon. But who knows—Apple have already exceeded all expectations on their in-house GPU tech.

While Apple doesn’t need to be chasing tier 1 AAA game performance, they do need graphics horsepower to handle two or three 6K displays. They also need to be a compelling platform for occasional gamers. And they need performance to power their push into AR.


How many other companies are competitive with Intel using ARM based designs specialized for their own needs? I know Amazon is doing some amazing things around ARM for their own data centers - both selling access to ARM based servers and using custom chips for networking hardware.

Isn’t Google also designing custom ARM chips for machine learning and their data centers?


AFAIK, Amazon's ARM stuff is all based on ARM-designed cores (the Neoverse N1, presumably Cortex-X1 in future). Yes, all the integration is done by Amazon but the underlying ISA implementation is the same.


Didn’t they acquire Annapurna recently and are making their own thing?


Still based on ARM cores, far as I can tell? Plenty more on chip than just the cores, though.


I've always wondered how independent µarchs are from ISAs. I mean, people do say that Ryzen shares a lot of elements with AMD's ARM K12 project.


ARM has a more relaxed memory model that allows you to skip some uarch components. For example, the x86 instruction cache needs to watch for writes to instructions it's executing, while ARM derived CPUs can just ignore those writes.

And ARM's fixed-length instructions allow Apple to just duplicate their instruction decoder logic until they are decoding (currently) 7 instructions per cycle.

With x86, you have to work out how long each instruction is to know where the next instruction starts. Intel and AMD actually try to decode an instruction starting at every single byte (over a 16-byte window), letting the earlier instructions cancel out any incorrect ones.

With this scheme, Intel can only decode 5 instructions per cycle, while AMD are currently doing 4. And they both require certain complex instructions to always go in the first decoder.

To overcome this bottleneck, Intel and AMD both cache decoded uops in an L0 cache to speed up tight loops. Intel can do 6 uops per cycle, AMD can max out at 8 uops.

Apple can skip all this complexity. Their decoders operate at full speed and there is no need to cache uops.

Beyond these two areas, the rest of the CPU core is reasonably ISA agnostic.

The scheduling and execution units can stay the same, but the optimal balance of execution units might change based on the ISA.

That balance also might change based on the type of code the CPU is expected to execute, and the compiler.


> And ARM's fixed-length instructions allow Apple to just duplicate their instruction decoder logic until they are decoding (currently) 7 instructions per cycle.

Only the A32 and A64 instruction sets of ARM have fixed 32-bit-length instructions. T32 instructions are either 16 or 32 bits long.


True. But even decoding a variable length instruction set with just two lengths is far easier than decoding x86.

With T32 you just look at bits 15-11 of the first halfword. If they are 0b11101, 0b11110, or 0b11111, it's the first half of a 32-bit instruction. Anything else (including the 16-bit unconditional branch encoding 0b11100) is a 16-bit instruction.

x86 is somewhat insane. Instructions can be any length from 1 to 15 bytes, and thanks to prefix bytes, the bits that specify the length can be found at any byte offset and might be spread over multiple bytes and interact with each other.

Even for 15 byte instructions, the 15th byte can tell the difference between "this is a valid 15 byte instruction" or "this instruction is actually 16 bytes long and therefore invalid"
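
To make the T32 case concrete, here is a minimal C sketch of the length check (illustrative only: the function name is mine, and the constants follow the Thumb-2 rule above, where 0b11101/0b11110/0b11111 in the top five bits of the first halfword mark a 32-bit encoding):

    #include <stdint.h>

    /* Sketch: decide whether a T32 instruction is 16 or 32 bits long by
       inspecting only its first halfword. 0b11101, 0b11110 and 0b11111 in
       bits [15:11] mark the first half of a 32-bit encoding; anything else
       (including the 16-bit unconditional branch, 0b11100) is 16 bits. */
    int t32_length_bytes(uint16_t first_halfword)
    {
        unsigned top5 = first_halfword >> 11;   /* bits [15:11] */
        return (top5 >= 0x1D) ? 4 : 2;          /* 0x1D..0x1F => 32-bit */
    }

An x86 length decoder cannot be written this simply: it has to walk prefixes, opcode bytes, ModRM/SIB and displacement/immediate fields before it even knows where the next instruction starts.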


> while ARM derived CPUs can just ignore those writes

Could you elaborate why?


On x86, an instruction is allowed to overwrite the memory for the very next instruction and expect the CPU to pick up the modified instruction and execute it.

On modern Superscalar designs, this requires the instruction cache to actively watch for any writes to cachelines it holds and invalidate them. If that cacheline is currently being executed, the core needs to go further.

The scheduler needs to decide if the instruction modifying the cacheline is before or after that position in the instruction stream. If it's after, the whole pipeline needs to be flushed, the instruction fetcher reset to the modified instruction, and execution restarted.

Not a fast operation, but it needs to be supported due to the spec (and legacy code)

With ARM, the spec says the CPU is allowed to do whatever it feels like in this situation. The cheapest option is to just ignore the write and execute whatever old code is currently in the instruction cache.

But to allow self-modifying code (and loading of libraries), ARM provides a special "invalidate the instruction cache" instruction which explicitly tells the CPU something funky has happened and it needs to flush stale data out of the instruction cache before continuing.
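
For a concrete picture, here's a minimal sketch (mine, not the commenter's) of what a JIT or dynamic loader does on ARM after writing fresh code, using the GCC/Clang builtin that wraps the required cache-maintenance sequence; on x86 the same call is effectively a no-op because of the hardware coherency described above. The helper name is hypothetical:

    #include <stddef.h>

    typedef int (*jit_fn)(void);

    /* After copying machine code into buf, synchronize the data and
       instruction caches before jumping to it. __builtin___clear_cache
       is provided by both GCC and Clang. */
    static jit_fn finalize_jit_code(void *buf, size_t len) {
        __builtin___clear_cache((char *)buf, (char *)buf + len);
        return (jit_fn)buf;
    }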


> ARM has a more relaxed memory model that allows you to skip some uarch components.

Note that ARM still generally supports ldar to match Intel’s semantics.


True. It becomes more of a power consumption advantage, because instructions that use those stricter memory semantics are explicitly marked.

Also, even with those instructions there are still things ARM can't currently do. x86 can and will do misaligned atomic updates (as long as they don't cross a cacheline); ARM can't.
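
As a small illustration (mine, not from the comment): with C11 atomics the stricter orderings are opt-in, so on AArch64 only the marked operations pay for LDAR/STLR, while on x86 ordinary loads and stores already behave this way. The function names are just illustrative:

    #include <stdatomic.h>

    long read_flag(atomic_long *flag) {
        /* Compiles to LDAR on AArch64, a plain MOV load on x86. */
        return atomic_load_explicit(flag, memory_order_acquire);
    }

    void set_flag(atomic_long *flag, long value) {
        /* Compiles to STLR on AArch64, a plain MOV store on x86. */
        atomic_store_explicit(flag, value, memory_order_release);
    }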


Well, you have the multiprocessor memory model, for example. Technically, you could implement a stricter one (x86's) on ARM, but that would likely cost you some efficiency on non-atomic accesses. And if your chip sits on a platform alongside competitors' chips that don't do the same, programs won't be able to profit from your stricter model anyway (and maybe Arm Holdings won't even let you document it). You might be faster on some atomic instructions, because your non-atomic ones already have to be fast under the stricter rules, but I doubt that would be beneficial in the majority of workloads.

Clearly, there can be shared elements, but, if you want something optimized, not everything.


Why would a larger page size be difficult to use with Windows? Also, why would that have a huge impact on the L1 cache? Are you talking about cache lines being larger? Intel CPUs actually always pull two cache lines from memory, so 128 bytes, even though a cache line is 64 bytes.


I'm not sure about the difficulty level of enlarging the pages in Windows, but I'm afraid there are 3rd-party programs hardcoding the page size (and existing ARM hardware they run on, etc.). Thinking more about it, though, IIRC Windows has a 64 kB allocation granularity in its memory-mapping APIs, so maybe it's not as difficult as I initially thought. Not sure.

Why does it have an impact on L1? Because you don't want to perform address translation before beginning your L1 lookup (otherwise performance would suffer). So you index your L1 cache by the least-significant bits of the address, which are the same in virtual and physical space. A bigger page size therefore means more bits available for indexing, and a bigger L1 without having to add more ways. Avoiding extra ways is good, because you have to compare the tags in all the ways and then mux the data out of the right one, so having tons of ways draws power. That's why L1D is typically 32 kB on computers with 4 kB pages: that gives 8 ways, which is OK. Intel went with 48 kB and 12 ways on Sunny Cove, but that's starting to get high. The A12 has a 128 kB L1D with 16 kB pages; if I'm not mistaken, that's conveniently 8 ways.
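
The arithmetic in that comment can be checked with a tiny sketch (mine): for a virtually-indexed L1 that is indexed only by page-offset bits, the number of sets is page_size / line_size, so the minimum associativity is simply cache_size / page_size. The helper name is just illustrative:

    #include <stdio.h>

    /* Minimum number of ways so the index bits fit inside the page offset. */
    static unsigned min_ways(unsigned cache_bytes, unsigned page_bytes) {
        return cache_bytes / page_bytes;
    }

    int main(void) {
        printf("32 KB L1,  4 KB pages: %u ways\n", min_ways(32 << 10, 4 << 10));
        printf("48 KB L1,  4 KB pages: %u ways\n", min_ways(48 << 10, 4 << 10));
        printf("128 KB L1, 16 KB pages: %u ways\n", min_ways(128 << 10, 16 << 10));
        return 0;
    }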


Larger pages also relieve TLB pressure somewhat and reduce page table size, although large memory areas are supposed to just use giant pages anyway.

It's worth pointing out that L1 is universally split into L1I and L1D of usually the same size, so for x86 mainstream L1 is specified as 64 KB, meaning 32K+32K for instructions and data, respectively.
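
As a side note, here's a minimal sketch (mine, assuming Linux) of the usual way large memory areas get huge-page backing today, which relieves TLB pressure much like a bigger base page size would; the function name is hypothetical and the madvise hint is best-effort:

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>

    /* Map a large anonymous region and hint that it should be backed by
       transparent huge pages. The mapping still works if the kernel
       declines the hint. */
    void *alloc_big_region(size_t len) {
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p != MAP_FAILED)
            madvise(p, len, MADV_HUGEPAGE);
        return p;
    }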


> But I'm afraid there are 3rd party programs hardcoding it (and existing ARM hw they run on, etc.)

Of course just about every JIT runtime does this.
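
For what it's worth, the portable alternative is trivial; a minimal sketch (mine) of querying the page size at runtime instead of hardcoding 4096 or 16384:

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* 4096 on typical x86 systems, 16384 on recent Apple ARM hardware. */
        long page = sysconf(_SC_PAGESIZE);
        printf("page size: %ld bytes\n", page);
        return 0;
    }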


Is the only thing that can ruin Apple's plans the chips not being "general purpose enough" or something and not running well?


Apple has been working on this transition for years. Just like they were running MacOS X on Intel long before they announced the Intel transition, you know there have been ARM-based Macs for a few years now.

It's early, but these benchmarks from the DTK looks really promising: https://gizmodo.com/a-wild-apple-arm-benchmark-appears-18442...


Benchmarks while emulating x86_64!


It's true that Intel (note, I'm a former employee) has had a mixed track record in creating new businesses outside of desktop and server chips. However, I'd like to point out a few things.

1. When Intel released HT as a way to recover ~15% of otherwise idle execution resources, the Linux scheduler took a while before it realized it should schedule work on all full cores before assigning something to a hyperthread.

2. Intel has been adding specialized SIMD vector operations and wide vector operations, yet popular compilers like GCC were really slow to take advantage of them. One or two blog posts a month will hit the HN front page about inlining assembly or using specialized math APIs like the Intel MKL to extract performance from the cores you're running on (see the sketch below).

Apple can add neat blocks like audio, low power video playback, always-on processor, and neural engine because they also release the software APIs for it. Maybe that's Intel's sin. They never realized how hands-off the white box industry needed to be to stay alive and relied too much on OSS developers to fuel general-purpose Linux performance outside of large HPC computers. Intel made add-on things, people didn't use them, and now Intel is perceived as the company that can't innovate.
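
On point 2, a minimal sketch (mine, not from the comment) of why those features so often go unused: a plain loop like this only becomes AVX2 code if the build explicitly targets it (e.g. gcc -O3 -mavx2 or -march=native); distro binaries built for the baseline x86-64 ISA fall back to older SSE code paths. The function name is just illustrative:

    /* Auto-vectorizable saxpy-style loop; the generated instructions depend
       entirely on the -m/-march flags the binary was built with. */
    void saxpy(float *restrict y, const float *restrict x, float a, int n) {
        for (int i = 0; i < n; i++)
            y[i] += a * x[i];
    }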


> the Linux scheduler took a while before it realized it should schedule work on all full cores before assigning something to a hyperthread.

According to statistics maintained by LWN [1], Intel was the #1 employer of Linux kernel developers in the latest kernel, and has been for a while. If you go back into previous years, Intel was not always as high up, but even back in 2007 (when LWN started tracking this) they were in the top 5-10 (depending on how you count it.)

Intel released HT way back in 2002. I don't know how active Intel was in Linux kernel development back then. But what Intel was or wasn't doing 18 years ago probably isn't very relevant to the question of Intel's current performance.

[1] https://kernelnewbies.org/DevelopmentStatistics


I’ll borrow the old BSD/Linux saying: Intel is a group of CPU makers writing an OS, where Apple is an OS maker that designed a CPU.


This is very interesting. I know Intel has their own compilers, but wouldn't seconding a couple engineers to the GCC/LLVM teams yield exactly the benefit you're describing here? It's a trivial cost to boost their own add-ons.


Intel's handling of ICC has always puzzled me. I'd bet Intel could live without any of the money it made them. Yet, they never open-sourced it so that GCC/LLVM could catch up faster with any tricks that gave ICC a performance edge. Knowing that Intel chips were fully supported by all major compilers would probably translate to more chip sale profits than the profits of selling ICC as a tool. The least they could have done was to release a stripped down version (e.g. without advanced profiling tools) as a no-nonsense free download.


Right. Because AMD had the same instructions (but presumably different pipeline timings?), they kept their secrets behind a bit of a wall, which added friction for people who would have liked the extra performance but weren't willing to jump through the hoops required to get it.

So now Intel chips are limiting battery life because every application is executing unoptimized garbage that runs longer and therefore uses more energy. Meanwhile, Apple Xcode will produce optimized binaries for Apple Silicon and the press will go wild.


That would also have made it easy for a fork to remove the code which disabled optimizations on AMD processors. I suspect that their managers viewed this as an edge since they could post high benchmark scores even if most common software didn’t use icc.


It's actually easy to work around the AMD deoptimization by just pretending to be an Intel CPU when the check is made - no need to fork the code.

On the bigger point, it ends up hurting them because companies often use gcc to benchmark across systems, so the Intel performance comes out lower than it might be if Intel really invested heavily in gcc rather than icc.
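
For readers unfamiliar with how that kind of check works, here's a minimal sketch (mine, not from the thread) of reading the CPUID vendor string that dispatchers key off ("GenuineIntel" vs "AuthenticAMD"), using GCC/Clang's <cpuid.h>:

    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned eax, ebx, ecx, edx;
        char vendor[13] = {0};

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;

        /* The 12-byte vendor string comes back in EBX, EDX, ECX order. */
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);

        printf("CPU vendor: %s\n", vendor);
        return 0;
    }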


Yes, you can bypass the check by editing the binary, but that's a non-trivial extra step and, more importantly, it means you can't use the official binaries for anything. Far from insurmountable, but it's the kind of thing sales guys are going to make sure every large customer's senior management hears.


Some people in the FOSS community don’t like proprietary vendor lock in and things like Intel MKL.

Standardise the features you add, work with AMD and I’m sure you’ll get more support.


It also makes building high-performance open source software challenging. Intel or AMD? If Intel, is the MKL library installed? The support matrix grows a bit out of hand.


This is especially important with vector instructions. I had to install Tensorflow 2 years ago and there were four packages available in the repositories:

- nVidia GPU with CUDA

- CPU with SSEv2 (or whatever the second version of vector was called) optimizations

- CPU with SSE optimizations

- CPU without vector optimizations


You mean AVX2? SSE was from the 90s


Yes


Although I'm not familiar with the specifics of (1) and (2), the characterization that Intel "took a while" or "were really slow" to enable features might be a bit unfair. (I am also a former employee)

Keep in mind that Intel's ability to properly light up new silicon features on your PC often relies upon your willingness to update your software. For instance, even if the SMT scheduler fix has been written and upstreamed, the next Ubuntu LTS to integrate the change may be 18 months away. The end user will believe that Intel is taking a long time to develop the fix.


Is Intel engaged in these communities?

My naive POV - It would seem worthwhile for Intel to participate in GCC/Linux to accelerate adoption of these innovations... PRs welcome, etc. (though I realize it’s often more complicated than that)


Gassée is focusing on the wrong things. "Thermal envelope"? Bah.

In the block diagram of the SoC for the new Macs, Apple very clearly signaled what they are doing. They are working towards Steve Jobs's long-term vision.

Onboard "advanced audio processing", "enhanced camera", "neural engine", "AI processing": this is about making Macs that support development of the next generation of software for iPhones.

Mobile phones right now are stuck. Apple wants to make them more useful to their users, and the way to do that is to make them context aware, continuously listening and occasionally looking around, so the phone knows what's going on.

Context-aware phones is what this is about. "Siri, write minutes of the meeting we just had, and email them to everyone."

Users will never accept a phone that's continuously uploading everything in their lives to Apple's servers (or anyone else's), so all of this capability has to be on board the device.

Intel is incapable of making a SoC that will support this, and no-one else has the capacity, so Apple has to do it themselves. (Also, Apple wants to keep the IP to themselves.)


> this is about making Macs that support development of the next generation of software for iPhones.

This doesn't sound right to me. You seem to be looking for ulterior motives where none need exist. Isn't it a sufficient explanation that Apple believes they can build a superior product at a lower cost? Why does it need to be some conspiracy about re-framing the Mac as an iOS devkit?

For this conspiracy to hold, you'd first have to believe that Apple builds Macs with developers in mind. Nobody with a passing familiarity with recent Mac history could believe that. It's been a long time since Apple has focused any Mac product on the needs of developers. Why would they start now?

Besides, so what if future Macs are more like the iPhone? While it might be of occasional benefit to iOS developers by increasing feature parity of their iPhone Simulator, that's an extraordinarily marginal reason to overhaul a critical nexus of Apple's product stack.


Intel is making silicon for an entire industry. The "white box" industry flourished by assembling a box of parts and shipping them out the door with no other value added. Intel hyperthreading is ancient at this point, but for years the Linux scheduler would schedule work on hyperthreads before real cores. Apple is aggressive on software, which requires them to be aggressive on hardware.


I don’t really buy this. Apple, in general, seems to be shockingly bad at software. They have a design aesthetic, they’re pretty good at ideas, and they are able to execute. They accumulate plenty of technical debt, though, and they are bad at maintaining their software portfolio.

Intel throws a lot of money at software, much of it open source, and this has played out fairly well for them. AMD is far, far behind in this regard.

I fully expect a cutting edge implementation of macOS on ARM, and I fully expect it to languish post release.


I think “shockingly bad” is a bit dramatic. OpenCL, Metal, and Grand Central Dispatch are all technologies Apple created to help developers take advantage of the hardware Apple ships.

I think a fair criticism would be the noise created by developers about the quantity and average age of bugs in their APIs.

In 2015, 10% of Intel’s engineers were software. I don’t know how that’s changed in recent years.


Apple also played an important role in the development of LLVM and WebKit.


The previous head of Windows would disagree with you:

https://medium.learningbyshipping.com/apples-relentless-stra...


> "Users will never accept a phone that's continuously uploading everything in their lives to Apple's servers (or anyone else's), so all of this capability has to be on board the device."

Oh man, I genuinely want you to be right in that prediction. But I wouldn't bet on it.

IMO we already see many (most?) users and even tech industry pros (who should "know better"?) showing comfort in uploading/streaming/routing large swaths of their lives to many companies. Companies arguably less "good" than Apple. (And many of these are Apple users who also have no problem hosting their lives on Google and Zoom). Because convenience, shiny things, and lifestyle marketing. And "free".

I would love a future where all personal data and heavy computing was done on local hardware but I bet most people, given the choice, would choose a cheaper option that puts streams and storage and computing in someone else's hands. Because "free" and/or "cheap". i.e. the product is you


If they wanted those extra things available on x86 macs, they could just stuff them in an accelerator chip.


Yeah, and they're already shipping custom chips (like the T2) in their intel laptops.


That’s an incredibly costly and inefficient approach. You used to be able to get x86 add on boards for Macs back in the late 80s and early 90s, but they cost a fortune and were incredibly clunky.


That's not a comparable thing. Putting specialized functions in their own chips is completely normal, and only a problem if they need really deep integration with the host CPU, or efficiency or size are extremely important (iPhone: sure. laptop, not so much), so I don't think those are a primary concern for the move to ARM laptops.


How does that help with making software for iPhones?

My guess is that they want the development platform to be a beefed up version of the target platform.


Intel has bought Movidius. Movidius Myriad could provide the right basis for this kind of "context awareness". It has been tailored for vision tasks originally but I don't see any reason why other perception tasks wouldn't benefit from this architecture.


Slightly off-topic: why SoCs? Cramming everything into a single die seems to be a poor design decision since your die size is likely to be limited by yield considerations. Curious why Apple doesn't go for ASICs inside a SiP or tightly coupled on an interposer/PCB.


Why Intel did not buy ARM is something that puzzles me...

In 2016 ARM sold to SoftBank for $31B.

In 2016 Intel had a market cap of ~$170B and about $17B cash on hand. I'm no M&A analyst but surely given their fundamentals at the time Intel could have bought ARM without financial difficulty, no?

If even Facebook can look two steps ahead and pay an eye-watering $19B for WhatsApp, couldn't someone in the C-suite at Intel have seen that owning ARM would have been a jolly good strategic purchase, or am I missing something?


The conversation was a non-starter due to antitrust laws. Intel has wanted to purchase both ARM and NVIDIA for at least a decade.

This is according to the internal rumor mill at Intel Labs circa 2010, so take it with a grain of salt.


I can see why buying ARM would be a problem for antitrust laws, but I don't see why it would be the case for NVIDIA. It seems that ATI/AMD were a stronger competitor to NVIDIA on the GPU side than they were to Intel on the CPU one.


Intel had (or still has?) the biggest share of the graphics market, as every Intel CPU had (has?) an integrated GPU. So by definition of market share, the largest supplier wanting to buy the second one is an antitrust problem. It does not matter that Intel's graphics cores are much worse.


Just guessing:

1/ Intel could be considered a monopoly

2/ Intel think they can beat ARM in the long term


Why should Intel enter the business of selling chips as fast as they can with a margin of 30 cents per chip when they are already in the business of selling chips as fast as they can with a margin of 30 dollars per chip?

At no point does Intel selling ARM make financial sense until Intel is on the verge of death.


But ARM doesn't sell chips.


A big problem with the thesis of this story: no one else has ARM SoCs as fast as Apple's. The highest end Android phones available get smoked on performance by the 3-year-old iPhone 8:

https://www.anandtech.com/show/15603/the-samsung-galaxy-s20-...

The tablet situation doesn't look much different.

Who's going to design equally capable SoCs for Windows portables? Doesn't look like it's going to be Qualcomm or Samsung.


Although it's not clearly on their radar, this is something nVidia could pull off.

They've got Tegra, and they appear to be building experience targeting 5nm. It could be reasonable to see nVidia willing to jump to 5nm for Orin or whatever comes after it.

Yeah I don't know how realistic that is, but, I struggle to guess at any other company that could stand up and go toe-to-toe with Apple at least in some target markets on ARM... if they wanted.


Not clear why they'd be interested, since their business model seems aimed at commodifying CPUs and putting the value proposition in the GPU.


Does single-thread CPU performance matter? It's been facing marginal returns for years. Most of the growth in computing power is in chiplets, GPUs, and TPUs.

The real question is mobile GPUs, and whether NVidia or AMD want to step into that market. On the desktop, Ampere and RDNA2 absolutely crush Apple's offerings; Apple is multiple generations behind. However, this comes at the expense of not really caring too much about TDP. The real question is whether NV or AMD can squeeze a cut-down version of their architectures into an SoC that's power competitive.

That is, I might be willing to buy an AMD "Fusion" that has a Ryzen and RDNA2 architecture in it, even if it is a severely cut down version, over a MacBook if the battery life is reasonable.


AMD has a deal with Samsung to integrate their GPU technology with the Exynos line.


What’s preventing Apple Silicon Macs from using GPUs from AMD?


The tablet situation is if anything worse because demand is terrible :/


Interestingly it's strong enough to support a fairly broad range of products from Apple.


Not so long ago, we had this at HN:

https://news.ycombinator.com/item?id=5587283 ("The rise and fall of AMD...")

With comments like these:

https://news.ycombinator.com/item?id=5588069 ("I could go on and on about all the reasons I believe AMD went down the tubes... but I think a lot of it reduces to the disfunctional corporate culture alluded to in this piece.")

Guess what happened with AMD? So I suggest waiting with the obituaries, at least until Apple actually has a working system a normal person can buy, and until Intel/AMD shows us their response.

The entire idea behind the obituaries is that Apple's whatever will be so performant that ordinary PCs will switch en masse. This misses a few things:

A) Where will the PCs get their ARM chips from? Not Apple, and non-Apple ARM is still obviously inferior to x86. Intel/AMD will rather keep their x86 monopoly, so it's in their interests to keep producing x86 as long as it's competitive even just against non-Apple ARMs.

B) The typical Desktop/Laptop user doesn't care that much about performance anymore - not as much as they care about running their favourite apps. x86 has a huge lead here.

All it takes is for Intel/AMD to stay/become slightly competitive with whatever Apple put out, and Apple will stay in their silo - which actually wouldn't bother Apple Inc. at all; They always ran their own ecosystem.

C) Really, CISC/RISC doesn't matter that much. Apple's advantage (if there's one) would be a result of other architectural decisions, we should ask ourselves if there's anything preventing Intel/AMD from copying those decisions.


The entire idea behind the obituaries is that Apple's whatever will be so performant that ordinary PCs will switch en masse

It doesn't need to and is unlikely to happen en masse. Apple creaming off the most valuable consumers is a problem for everyone else. You can see that in the smartphone market where Apple dominates the profits.


PC makers switching to ARM was the thesis of the linked post.

Regarding your thesis:

'Creaming off most valuable consumers' somehow didn't stop Huawei, Samsung, etc. from being huge profitable enterprises. Partly because it's not quite true (Apple has a position in some countries, in others it might as well not exist), partly because there's more to things than a benchmark comparison (Apple survived for a long time when Mac processors were inferior), and partly because we'll see a response from Intel and AMD. It's going to be an interesting few years in this sector.


Is it really the x86 architecture that's driving costs up, and not core microarchitecture and process? From where I sit, what we're actually seeing is a 7nm (and impending 5nm) TSMC process destroy the old 14nm process Intel has been hung up on for too long. I mean, that's a quadratic increase in density right there. If Intel could fab a 7nm chip right now, I'm pretty sure it would compete in the power department with any ARM chip on the market. Just look at the latest Ryzen 4800U. It's a 10W 7nm x86_64 chip that outperforms the A12Z. I don't want to downplay the proliferation of ARM we've seen lately due to the smartphone market, and having another architecture is a good thing. But I'm not convinced the author isn't scapegoating x86.

https://gizmodo.com/so-just-how-powerful-are-apples-new-lapt...


I think that Gizmodo article is getting it very wrong. They keep talking about "the A12Z intended for the Mac", but AFAIK there is no "A12Z intended for the Mac". When Apple switched to Intel their TDK used a Pentium 4, yet no Mac ever shipped with a Pentium 4 (besides the TDK itself, if you consider it a Mac).

The ARM TDK is basically an iPad Pro running macOS, without a screen and with extra RAM. The A12Z itself is just a re-binned A12X (with one extra GPU core activated), a processor that is nearly two years old.

No way Apple is shipping that in their first MacBook ARM...


>No way Apple is shipping that in their first MacBook ARM...

No doubt. However, I don't think the article is so much "getting it very wrong" as it is working with what they've got. We've seen what a 10-18 watt SoC from Apple can do in the iPad, and it's pretty cool. But the point is it's not jaw-dropping, revolutionary stuff. We can look at what AMD is selling right now. It's also pretty cool and starting to turn some eyeballs. It's very unlikely Apple is going to release something that outright spanks the status quo. So my expectations are that whatever Apple releases will be competitive with what we currently see from AMD and, maybe, with what we'll see later this year. That's all. I'm skeptical that the shift to ARM is the crux of the matter, is all.


I want those new low-TDP Ryzens either as AM4 socketed or mounted in mini-ITX motherboards, but only the higher TDP desktop models seem to be available.


Lenovo just released the Thinkpad X13 (an AMD version of the X1) for the first time. It has a 10-25W TDP Ryzen 4450U/4650U/4750U. I think the focus right now for the lower-TDP chips is laptop computing, because that's where you'll see the most improvement. I've been following the market for the past 9 or so months because I need a new machine, and both Intel and AMD have taken that strategy. Look at the ASUS G14: a laptop that can nab 10h of battery life on a 76 Wh battery because its workstation (H, not U) series processor is only pulling 10W. And it benches close to a desktop 10th-gen i7 at full power. It's getting exciting.


AMD really are trying to get in the enterprise laptop segment, simply because that's where Intel has had a near-monopoly since the Pentium M.

So I know why I can't get a good mini-ITX board, but I still really want one :-)

As is, my best option for the living room NAS/HTPC turned out to be a J4105 board from ASRock. 10w TDP and apparently it can do 4 simultaneous GPU-accelerated 4K transcodes, which is pretty neat. Perhaps I'll upgrade to an AMD board at some point.


Longer term this will also likely accelerate servers running on ARM. Writing software on ARM laptops that is then deployed to production on x86 servers will start to cause a host of new challenges. The switch to running ARM in production will have many advantages for developers and will likely be very attractive to cloud providers (AWS, Azure), as the cost of electricity for these servers may be significantly lower.


If there was an actual energy efficiency advantage -- i.e. less power consumed for the same amount of work -- Google would already be 100% on ARM. Why do you think they would leave that on the table?

I realize the situation changes every time a new CPU comes out, but I have never personally seen a real workload where ARM won on energy efficiency and had reasonable performance. Tests like [1] and [2] showing x86 having an orders-of-magnitude lead on database performance vs. AWS Graviton2 should give you serious pause.

1: https://openbenchmarking.org/embed.php?i=2005220-NI-GRAVITON...

2: https://openbenchmarking.org/embed.php?i=2005220-NI-GRAVITON...

If you're wondering why ARM needs to have both competitive performance and energy efficiency, see Urs Holzle's comments on wimpy vs. brawny[3].

3: https://static.googleusercontent.com/media/research.google.c...


(disclaimer: I work at AWS)

Hello,

There definitely is that power use advantage.

Those tests are pathological cases because of bugs, and aren't representative of the performance of the hardware. (for that one I suspect that ARMv8.1-A atomics weren't used in compiler options...)


Well, the listed GCC options do not specify microarchitecture for either ARM or x86. So it's probably a k8-compatible binary on the top line, too. I'm also not sure why atomics would be important in a single-client mysql benchmark. Either way, the risk that your toolchain doesn't automatically do the right thing is one of the costs that keeps people from jumping architectures.


GCC 10 onwards can finally detect and use the Armv8.1 LSE atomics at runtime on arm64 (via outline atomics).

> I'm also not sure why atomics would be important in a single-client mysql benchmark

They are, because exclusive load & store operations are quite expensive on Arm, even if you actually just run a single-threaded workload.
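
To illustrate what that costs (my sketch, not the poster's): the same C11 atomic increment compiles to an LDXR/STXR retry loop on baseline ARMv8.0, but to a single LDADD when LSE atomics are enabled, e.g. with -march=armv8.1-a, or selected at runtime via GCC 10's outline atomics. The function name is just illustrative:

    #include <stdatomic.h>

    /* On x86 this is a single LOCK XADD either way; on AArch64 the codegen
       depends on whether the ARMv8.1 LSE extension is available/enabled. */
    void bump(atomic_long *counter) {
        atomic_fetch_add_explicit(counter, 1, memory_order_relaxed);
    }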


Let me ask you then, since you seem plugged in: if the win is there, why can't I buy RDS on Graviton?


I presume this has to do with support for ARM with the database software.

PostgreSQL appears to have had ARM support for sometime. Mysql only added it with 8.0. As far as the other database options are concerned, ARM isn't even supported yet.


Ok, so why would I use an architecture that barely has any software support?


>If there was an actual energy efficiency advantage -- i.e. less power consumed for the same amount of work -- Google would already be 100% on ARM.

Energy-efficiency wins on server workloads are a very recent thing, to the point of being brand new: they basically start with Neoverse N1 / AWS Graviton2. And even then, it's not all workloads.

Not all software is optimised for ARM, compared to decades of optimisation on x86. Not to mention the compiler options, and that EPYC was running on bare metal with an NVMe SSD while Graviton 2 was running on Amazon EBS.

And most importantly, you would be paying for 64 threads on Amazon for the price of a 64-core Graviton 2, i.e. it should be tested against a 32-core EPYC 2 with SMT.

I still doubt Graviton 2 would win a fair test, but it would be close, and it would be cheaper. And that is the point.


But up until recently Intel have had a manufacturing process advantage that has made it difficult / impossible for the likes of Google to source competitive high performance cores. That advantage has slipped away.

EPYC is an innovative product and comparing it with Graviton clearly shows that AMD has done a great job and that Graviton is not quite fully competitive yet (but not to the extent that those benchmarks seem to indicate, as others have commented).

I think its possible to overstate the energy efficiency gains from using ARM but all the indications are that ARM cores with fully competitive performance will emerge and that they will have some efficiency advantage - after all why would AWS be investing in ARM if not?


I found this to be an excellent explanation for what is going on in these x86 CPUs that makes them so powerful:

https://youtu.be/Nb2tebYAaOA (Starting from ~6:30)


Add timestamps to your link instead of messing around with parenthesis. https://youtu.be/Nb2tebYAaOA?t=390

Right click -> "Copy video URL at current time"


The interviewer guy is weird


> If there was an actual energy efficiency advantage -- i.e. less power consumed for the same amount of work -- Google would already be 100% on ARM.

I don't think they would do that.

It is not wise to venture into the microelectronics business, let alone CPUs.


>Google would already be 100% on ARM

Google is a laggard when it comes to server tech. Amazon and others have leapfrogged them.


That's pretty silly. What makes you say this? There are large groups at Google responsible for buying every computer there is and evaluating the TCO thereof. With operating expenses exceeding two billion dollars per week, they have a larger incentive to optimize their first-party power efficiency than anyone else in the business. I'm fairly certain their first-party workloads (search etc) are the largest workloads in the world.


> With operating expenses exceeding two billion dollars per week

"Alphabet operating expenses for the twelve months ending March 31, 2020 were $131.077B, a 13.47% increase year-over-year."

Operating expenses is everything, including salaries, etc. Google isn't dealing with $2B/wk in power/server purchases, though they're still huge, no doubt.


Neither Amazon nor Apple are buying off the shelf ARM chips. They are both designing processors to meet their needs. Amazon bought Annapurna labs and Apple bought a slew of companies specializing in processor design.


I don't think they would switch even if they could in the near future.

The poster above says that Amazon "leapfrogs." The question is "leapfrogs where?" The fact that ARM cores cost 100+ times less than Intel's, and are n times more power efficient, has been well known for an eternity.

What people don't get is that you get the whole platform on x86, while ARM is a very, very DIY thing, even if you are a multinational with lots of money for R&D.


> I don't think they would switch even if they could in the near future.

They've already brandished their ability to port their whole product from x86 to POWER[1], and deploy POWER at scale if they need to[2]. My personal interpretation of these announcements is they are made with the purpose of keeping their Intel sales representatives in order, but the fact that you don't also see them or anyone else brandishing their AArch64 port should tell you something.

1: https://www.cnet.com/news/google-acquires-a-taste-for-ibms-p...

2: https://www.forbes.com/sites/patrickmoorhead/2018/03/19/head...


I'd say, less bluntly, that Google is not as innovative as it once was. Old large companies ossify, and Google is not an exception. It failed on social networking (Facebook), it failed on instant messaging (WhatsApp), it failed on picture memes (Snapchat), it failed on video memes (TikTok), it failed on videoconferencing (Zoom)... you may see some kind of pattern there.

If asked whether Google will succeed at something new (say, Fuchsia), given those priors, my response will be: "no, it would be a surprising first in many, many years; the company is in decline."

What we're missing is the connection between the services of the large companies: Google, Amazon, Microsoft all have an offering made of devices (hardware), websites (software) and cloud services. There seems to be a synergy, where you benefit from doing all 3 things in-house to reduce costs on your core product or to capture consumer minds. Microsoft is getting back in phones, with an Android offering. Amazon is not giving up on Kindle.

Notice how Apple is missing on the cloud services part here. They have some internally (for Siri) but they do not sell them.

Even if they don't start a cloud offering, they may sell their CPUs to others who will, before eventually rolling their own hardware.

This will give time to people who adapt existing server software to work better on Apple ARM CPUs (recompiling is the tip of the iceberg; think about the differing architecture, what can be accelerated, etc.)

We are only just now seeing SIMD/AVX optimization for database-like computation. It may take a while.


Apple is not missing out because it doesn’t jump on every bandwagon that is not part of its core competency. It’s still the most profitable out of all the tech companies.


If youtube is considered a failure vs tiktok then I don't know what success would be.

Same with Hangouts and video calling.

Tiktok and zoom are both flash in the pan fads.


Different use cases and monetization profiles.

Youtube requires a lot of server space compared to tiktok (<10 min means no ad money, so people make videos at least 10 min long!), and Zoom requires almost no space, while it can sell corporate subscriptions.

The only reason youtube still enjoys some success now is because it wasn't made in house, and the acquisition wasn't too badly managed. Grandcentral (parts of which still live as google voice) was a different story.

But it only shows that the last success Google made "in-house" was a long, long time ago. The Alphabet rebranding changed nothing. Since YouTube, Google has turned into another Yahoo for startups: a place they go to shrivel and die.


Citation needed


Ultimately moving data is the prime power consumer in a data center, so unless you are somehow drastically reducing the amount of data movement, you are not going to get drastic energy savings. This remains true even inside systems and CPUs, fast and wide buses and caches require lots of power, the main power cost in wide SIMD (AVX-512 vs. AVX2) isn't the computation itself, it's getting the data to and from the ALUs.


> Writing software on ARM laptops that are deployed to production servers running on x86 servers will start to cause a host of new challenges.

We already know what this is like, and the challenges are usually in the other direction (write/test on x86, attempt to deploy on ARM).

This is due to things like word alignment and memory ordering requirements. For the most part, the "extra complexity" in x86 allows you to ignore a lot of the stuff you need to pay more attention to on ARM.


Millions of iOS developers develop on x86 and deploy to ARM devices every day. The iOS simulator compiles apps to x86 and they run on top of an x86 version of the iOS frameworks.


The "x86 version of the iOS frameworks" is where those differences would be apparent. Apple may have done a good job with their simulator, but that's just it -- Apple did this work so that their iOS developers wouldn't have to.

As an iOS developer, you don't need to think about memory barriers if you don't want to, but Apple's simulator and frameworks absolutely do. That's the kind of work that awaits systems programmers who want to port high performance x86 server software to ARM.

x86 hardware is physically more permissive than ARM. Accesses that must be aligned on ARM can be left unaligned on x86, and they'll still work correctly. But x86 isn't going to explode either if you do happen to align them. ARM is weakly ordered, x86 is strongly ordered, so you generally need more fences on ARM than you do on x86, but leaving these fences in place doesn't break things on x86.

My point was that code that works correctly on ARM usually works correctly without modification on x86. The opposite is generally not the case, even if the iOS simulator hides this for you.


People use Java and Go to write server software. They'll make things aligned properly, so it won't be a problem.

The real problem is the memory concurrency model. You will hit subtle concurrency bugs on ARM which never manifested on x86. And those bugs will be present in any language with threads and shared data. They are hard to debug and frustrating, because they'll lead to rare deadlocks and resource leaks.
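
A minimal sketch (mine) of the classic shape of such a bug, the message-passing pattern: with relaxed accesses the reader can observe the flag set but still read stale data under ARM's weaker ordering, while on x86 the stronger hardware ordering usually hides the mistake in practice. Function names are just illustrative:

    #include <stdatomic.h>

    int data;
    atomic_int ready;

    void writer(void) {
        data = 42;
        /* Bug: needs memory_order_release to publish 'data'. */
        atomic_store_explicit(&ready, 1, memory_order_relaxed);
    }

    int reader(void) {
        /* Bug: needs memory_order_acquire to pair with the release. */
        while (!atomic_load_explicit(&ready, memory_order_relaxed))
            ;
        return data;  /* May read stale data on ARM with relaxed ordering. */
    }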


The transition to ARM at AWS appears to be well underway.

https://aws.amazon.com/ec2/graviton/


Now the question is, who's going to make those chips?

While the big cloud providers are certainly able to internalize design of their hardware, you still need to ship millions of servers to smaller players.


Ampere Altra and Marvell ThunderX are targeting this market. Qualcomm, Broadcom, and Nvidia tried earlier and gave up, but I wonder if we will see them enter back in.


> Furthermore, Apple doesn’t buy the expensive Xeon chips, used in millions of Cloud servers, that represent a growing proportion of Intel’s revenue.

While Apple is not traditionally considered a hyperscaler (Amazon, Microsoft, Google, Facebook, Alibaba, etc.), their own datacenter footprint is still pretty big. You would be using it when you use Siri and iCloud. And they use Xeons.

>Margins will inevitably suffer as the ARM-based SoC field is filled with sharp competitors such as Qualcomm and Nvidia, sure to be joined by arch-enemy AMD and others, all ushering in a new era of PCs.

That is the problem. It isn't ARM vs x86, nor Intel's fabs vs TSMC (although they are part of the equation). It is their margin.

Intel is not willing to risk their current healthy margin, and there is nothing inherently cheaper about an ARM design. Intel could lower their x86 chip prices, accepting less margin, to compete. In that case the compatibility of x86, at a slightly higher premium, would be able to fence off any ARM attack on the PC market.

In the short term everything will still be fine for Intel. The market moves very slowly unless you are Apple, which controls everything in its stack. But in the long term, the unit economics of an x86 chip at Intel's current margin stand zero chance of competing. Unless Intel's fabs pull off some miracle and suddenly leap ahead of TSMC by a generation, which as far as I can see has zero chance of happening in the next 5 years. Further out, we will have to wait and see.

That is of course assuming the cost of running those fabs and their tech is equal, which we know isn't the case.

And I haven't even mentioned AWS Graviton 2. I would not be surprised if Graviton 3 is already sampling.


The PC market is minuscule compared to mobile: stagnating, slowly declining, and with a long refresh cycle. Not exactly a market you want to be confined to while being excluded from the larger one.

Even worse for Intel, most computers over $1000 are Macs, leaving Intel with the low end.


>Not exactly a market you want to be in and be excluded from the larger market.

I am not sure what you are trying to suggest.

You either go into the bigger market and compete at lower margins, or you stay within the market where you have margin and milk it for as long as you can.



> PC market is minuscule compared to mobile, stagnating , slowly declining and has a long refresh cycle.

Modern phones have 2+ GB RAM and an SoC that could easily be used in a tablet or laptop (think low end Chromebook). The PinePhone literally runs mainline Linux. Mobile is PC, it's only the physical form factor (and associated power usage constraints) that differ.


I agree, but that's beside the point. The point was that Intel relying on its moat of PC sales is not a great proposition.


Fair enough. Relying on the sale of either x86 based systems or devices with a large form factor seems like a questionable business plan to me as well.


Apple Silicon-powered Macs will be better than Intel-based laptops, at least after a few product cycles. I say "better" in terms of performance/watt.

But will their margin of superiority be large enough to cause the seismic shifts in Windows/x86 land that this article suggests?

I'm not sure.

If Apple's laptops are 25% or perhaps 50% better, I'm really not sure that's enough to cause a major upheaval.

I think we will wind up in a middle ground different from the upheaval JLG suggests.

We are already at a place in which your average laptop is "good enough for most people"; being 25%-50% better than "good enough" is a nice competitive advantage but that's not going to necessarily turn the world upside down.

- "Power users" will appreciate the CPU improvement. At this point, that essentially means video editors. It would mean gamers, but Macs still won't be the first choice for gaming.

- "Extremely mobile users" will appreciate battery improvements. However, I suspect most laptop users use them chained to a desk most of the time anyway. I would love 50% more battery life, but it's not really actually that much of a factor for me personally.

- "Cheap laptop buyers" won't be buying Macs anyway as Apple (wisely) doesn't seem like it wants to play in the sub-$1000 space. Though... maybe it will now.


Your last paragraph and your fourth paragraph are the big upheaval.

If you ask me, Apple is going to make a ~$700 passively cooled (sealed like an iPad) MacBook Air with similar/better performance to the current model.

Most importantly, it will have the same build quality and “expensive” feeling.

Sure, there are plenty of laptops now with similar build quality to Macs, but none of them are actually cheaper than a Mac. Load up a ThinkPad X1 Carbon or XPS 13 with a high DPI display and comparable specs and you’ll see what I mean.

But now you’re going to have Apple saving $200 compared to every other computer by not giving Intel a dime. The A13 chip is a $10 line item in the component cost of an iPhone.

This barely even costs Apple R&D money since it was already a part of the much larger iPhone and iPad business.

Literally nobody will be able to touch Apple on the value end of the spectrum unless they go into bargain basement territory. Are customers going to buy a $600 plastic 1080p laptop with big fans, keyboard deck flex, and 5 hour battery life or will they buy a $700, thin, fanless, retina aluminum Mac with the same performance, 10 hour battery, and a much better fit and finish?

If this all sounds a little unrealistic, I’ve got an iPad Air to sell you!


I hope and pray (to the extent that an atheist can pray) that this is what will happen.

There is zero question in my mind that Apple can do what you say, for the reasons you say.

The only potential obstacle is institutional will. You would now have a larger overlap between low-end Macs and the higher-end iPads. In my opinion, this is not a problem because I don't view those product lines as conflicting. But does Apple?


> This leaves Intel with one path: if you can’t beat them, join them. Intel will re-take an ARM license (it sold its ARM-based XScale business to Marvell in 2006) and come up with a competitive ARM SoC offering for PC OEMs.

I find this path unlikely and it depends on Intel's foundries not falling further behind TSMC. The most likely path is to compete on price-per-peformance (i.e. slash prices and internal costs) rather than performance-per-watt.


What this discussion lacks is the acknowledgement of just how titanic Intel is.

It is pretty much the biggest semiconductor company out there, focusing solely on 3-4 main product lines.

The most surprising thing is them under-performing so much while having resources comparable to the next few competitors combined.

It is not only their fabs that got an arrow to the knee with 10nm.

Their newest microarchitectures deliver tiny IPC gains in comparison to AMD's products even on comparable nodes, while having significantly more transistors per core.

Tiger Lake throws in 40-45% more transistors and only gets at most a 10% IPC gain; the rest of the performance gains come from better thermals.


I think you have some salient points there.

A few years down the line, Intel will look back and thank AMD for their decision to spin off GF and go with TSMC with Zen 2.

EPYC and Ryzen are keeping the x86 arch relevant until Intel is able to catch up on their next-gen process nodes.

AMD may be eating Intel's market share for lunch in a few markets, but in the long term they will keep ARM far enough away for it to remain a niche player, both in the server space and the laptop space.


There were a grand total of 250 million PCs sold last year, and that includes around 20 million Macs. That also includes PCs with AMD processors.

https://www.gartner.com/en/newsroom/press-releases/2020-01-1...

Apple alone sold more ARM based iPhones and iPads than that last year at higher profit margins. Not to mention all of the ARM based chips used for the Watch, Airpods, Apple TV’s, and HomePods.


It’s not Apple that matters here. At all. Ever since they took ARM design in-house for their devices, it's been clear they don't want to be dependent on another corporation's chip design.

What this misses, gravely I think, is that from the armchair CEO angle, I would want to go after _Qualcomm_

They’re the biggest vendor in this space for mobile as far as I can tell. All the major Android vendors use them, including in high-end devices, and they're apparently incredibly hard to work with from what I understand. This would be the way in for Intel to muscle into the ARM market, by supplanting Qualcomm's dominance.

That would make more sense to me if I were running Intel. Then again, they ditched their cellular unit, so it seems this might be questionable altogether.


Apple doesn't want other vendors using their CPUs.


> Apple doesn't want other vendors using their CPUs.

Exactly my point, in so far as what I said: Apple's never going to be Intel's customer for CPUs once the last Mac requiring Intel CPUs is no longer supported. That's a dead market.

Qualcomm on the other hand, is ripe for some real competition. If Intel wants to swing big into the ARM CPU game, they should look to dethrone Qualcomm. Apple is pretty much irrelevant here. That’s been clear since they brought all their CPU designs in house


To add to JLG's points, a top-of-the-line 2020 MacBook Air that sells for $1,450:

https://browser.geekbench.com/macs/macbook-air-early-2020-in...

Is significantly slower than a $400 iPhone SE in single core, and only slightly faster in Multi-core.

https://browser.geekbench.com/ios_devices/iphone-se-2nd-gene...

The entry level $900 MacBook Air is 30% slower in single and multi-core than an SE:

https://browser.geekbench.com/macs/macbook-air-early-2020-in...

Now imagine next year's MacBook Airs running the next-generation A14 processors on a 5 nm process instead of 7 nm, with much more thermal headroom than a phone, with far faster integrated GPUs than Intel's, and with functionality like the T2 integrated into the SoC. And using processors that are at least $100 cheaper.

It’s mind boggling to think of how much faster they will be, how much longer their battery life will be, and that they likely will be even cheaper.


> Is significantly slower than a $400 iPhone SE in single core, and only slightly faster in Multi-core.

Geekbench is super misleading because they do things like include benchmarks which are hardware-accelerated on the CPU in the iPhone but not the Intel ones. The weak little low power CPU in the Air is also hardly the fastest single thread Intel processor, much less multi-core, so I don't know what they're planning to do on the desktop.

> Now imagine next years MacBook Airs running the next generation A14 processors on a 5 nm process instead of 7 nm

But then it's competing with not only whatever Intel manages to come up with in the meantime but also AMD processors on the same 5nm process.

> with far faster integrated GPUs than Intels

But, again, faster than AMD's? Or what Intel will have at the time, given that they're now developing discrete GPUs and will have that in house to improve their iGPUs?

> And using processors that are at least $100 cheaper.

Which Apple has no incentive to give to you instead of putting in their own pocket, especially if what they offer is actually better.

> how much longer their battery life will be

"Much more thermal head room than a phone" and "much longer battery life" don't go together, they're a trade off against one another.

The amount of hype around this is getting to a level that people are going to be very disappointed if it doesn't live up to it.

I'm waiting for some independent third party benchmarks before getting so excited. And not the first ones which were given early access in exchange for favorable skewing.


>"Much more thermal head room than a phone" and "much longer battery life" don't go together, they're a trade off against one another.

Every time I see someone write about the benefits of "Apple Silicon" they always mix this up. What they do is say that the ARM chip is much more power efficient and therefore it will have a longer battery life and can be cooled passively. In the next sentence they say you can sacrifice battery life for higher performance and add more powerful cooling. Yet for some reason people seem to think that you can have both at once without any of the downsides. ARM based SoCs are usually power efficient because of asymmetric cores and low frequencies.

AMD created a symmetric 8 core chip running at 10W simply by cranking the frequency down to 1.8GHz. If they add asymmetric cores they could lower power consumption even further.


>Geekbench is super misleading

SPEC is the industry standard for comparing performance across platforms.

>Last year I’ve noted that the A12 was margins off the best desktop CPU cores. This year, the A13 has essentially matched best that AMD and Intel have to offer – in SPECint2006 at least. In SPECfp2006 the A13 is still roughly 15% behind.

https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...


So now we've gone from being "significantly faster" to being as fast on integer, slower on floating point and slower on threaded applications. Which seems a lot more plausible, but now where's the argument that this is going to be a marked improvement rather than a lateral shift with a huge transition cost?


This is a comparison of an iPhone part to a laptop/desktop part. The iPhone part doesn't have anywhere near the same power budget, nor does it have active cooling. The SPEC benchmarks take a couple of hours to run, so cooling is definitely a factor.

Apple's laptop part will be the first time they have a level playing field on both.

The Apple laptop part is also going to have a process node advantage. It will be built on TSMC's 5nm node while AMD is sticking with an enhanced version of TSMC's 7nm node and Intel will still be on 10nm.

The word was that Apple's entry level laptop part will have 8 next generation Big cores and 4 next generation little cores, which is quite aggressive for an entry level laptop part. The A13, in comparison, only has two big cores.

We'll see how Apple's laptop chip fares when it ships.


> The iPhone part doesn't have anywhere near the same power budget, nor does it have active cooling.

This is much less of an issue for single thread performance because a single thread isn't going to exhaust an 8-core CPU's full power budget anyway, or if it does it's by boosting to very high (i.e. less power efficient) clocks which only give a few extra percent more performance. The main thing the extra power budget gives you these days is the ability to have more cores, which x64 processors already have.

> It will be built on TSMC's 5nm node while AMD is sticking with an enhanced version of TSMC's 7nm node and Intel will still be on 10nm.

AMD already has 5nm processors on their roadmap. Apple might release theirs first, but having a modest advantage for a few months until your competitors release their next design isn't especially groundbreaking. It's basically the expected state of affairs when companies compete with each other and neither has a clear advantage -- the one released most recently is a bit faster and then soon the competitor has a release and it goes back the other way.


What is on your future roadmap doesn't matter at all, if it's not going to ship in time to compete with your rivals in the current generation.

In the next generation, Apple is going to gain a process node advantage, and level the playing field for the first time in respect to power and thermal envelope.

Given that they were already in the same ballpark as Intel and AMD without any of those advantages, we're in for a sea change.


> What is on your future roadmap doesn't matter at all, if it's not going to ship in time to compete with your rivals in the current generation.

The point is that "current generation" doesn't work like that. If AMD releases Zen3 on 7nm and Zen4 on 5nm and the ARM Macs arrive half way between one release date and the other, which one do you want to compare them to? Neither would be strictly "fair" because one or the other would have an advantage of several months of advancement one way or the other. But doing all of this just because they couldn't wait a few more months for Zen4 doesn't make much sense.

We're not going to really know until ordinary people can get it in their hands and benchmark it.

The whole thing strikes me as Apple seeing Intel stagnating and starting a long-term process in motion several years ago which was too far along to stop by the time AMD brought competition back to the market again.


>If AMD releases Zen3 on 7nm and Zen4 on 5nm and the ARM Macs arrive half way between one release date and the other

Zen 3 and Apple's laptop chips are both due in the fourth quarter of this year.

Apple is going to have a process node advantage at least until Zen 4 ships, which definitely will not happen only a couple of months after Zen 3 ships.

However, despite Apple having a better hand to play than AMD or Intel for the first time, I agree that the proof is in the pudding.


Interesting. These benchmarks confirm my thoughts about the energy efficiency of Apple Silicon. As you get closer to AMD and Intel performance your energy efficiency will drop and also approach Intel & AMD. Apple Silicon is the real deal when looking at its performance but it's also equally power hungry.


> Geekbench is super misleading because they do things like include benchmarks which are hardware-accelerated on the CPU in the iPhone but not the Intel ones.

This was a problem in Geekbench 4 (especially so on Android) and was resolved in Geekbench 5.

You can believe the numbers to be roughly accurate within the sampling variation.


If Geekbench was so misleading, why does the Intel version of Geekbench run so fast on two year old iPad chips? And “Hardware acceleration” isn’t misleading if it’s accelerating commonly done tasks.

Intel is years behind on process and GPUs. Apple is not just getting faster CPU and GPU performance in lower power envelopes, it’s getting those processors at less than half the price. Apple could have switched to AMD to get better faster processors cheaper, but nowhere near half as cheap, and nowhere near as power efficient.

And Apple has no reason to use this to increase margins when they can use it to dramatically increase volume and market penetration, which will increase margins on its own. For example, the iPhone SE is faster than any Android phone ever made, yet Apple still priced it at $399. They wouldn't risk cannibalizing their higher-end phones unless SE margins were comparable to higher-end phones. Which again shows you how cheap their Ax CPUs are.

And iPhones and iPads have all day batteries a fraction of the size of laptop batteries, and without the space, mass, or fans of laptops. The A14 will be able to run in MacBooks at a higher power level while still remaining cool, providing for more cores and faster processor speeds.

It will still use significantly less power than the latest generation Intel laptop CPUs, also because the SoC will integrate a performance GPU, T2 functionality, the Neural Engine, etc., all in one package.


> If Geekbench was so misleading, why does the Intel version of Geekbench run so fast on two year old iPad chips?

If Geekbench wasn't so misleading, why do Apple's chips do worse on other benchmarks?

https://benchmarks.ul.com/compare/best-smartphones

https://www.gsmarena.com/benchmark-test.php3

> And “Hardware acceleration” isn’t misleading if it’s accelerating commonly done tasks.

Hardware acceleration is misleading because it can give you huge speedups in a single measurement which significantly affect the overall score when it's averaged in, and because it's easy for the competitor to hardware accelerate the same thing if it really matters without losing any of their advantage anywhere else.
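
To make the averaging point concrete, here's a minimal sketch in Python, assuming the composite score is a geometric mean of subtest scores (the numbers are made up): a single hardware-accelerated subtest lifts the overall number noticeably even when every other subtest is unchanged.

    # Hypothetical illustration: one 10x hardware-accelerated subtest
    # skews a composite that is a geometric mean of subscores.
    from statistics import geometric_mean

    baseline    = [100] * 10             # ten subtests, all equal
    accelerated = [100] * 9 + [1000]     # one subtest sped up 10x in hardware

    print(geometric_mean(baseline))      # 100.0
    print(geometric_mean(accelerated))   # ~125.9, a ~26% jump from one subtest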

> Apple could have switched to AMD to get better faster processors cheaper, but nowhere near half as cheap, and nowhere near as power efficient.

With the amount they have to spend on R&D in order to do this, it's hard to argue "cheap" and the AMD processors have TDPs down to 10W, so it's also hard to argue efficiency.

> And Apple has no reason to use this to increase margins, when they can use it to dramatically increase volume and market penetration, which will increase margins on its own.

Lowering prices never increases margins. It may increase volumes, which reduces amortized costs, but there is a grisly trade off there. If you're selling a phone for $400 which has a $200 unit cost, lowering the price to $300 is probably not going to double your sales but it is going to halve your margins.
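
A quick worked version of that trade-off, using the hypothetical numbers above:

    # Hypothetical numbers from the example above.
    unit_cost = 200
    old_price, new_price = 400, 300

    old_margin = old_price - unit_cost        # $200 per unit
    new_margin = new_price - unit_cost        # $100 per unit -- halved

    # Unit sales would have to double just to keep gross profit flat.
    breakeven_volume_multiple = old_margin / new_margin   # 2.0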

> For example, the iPhone SE is faster than any Android phone ever made, yet Apple still priced it at $399.

Meanwhile this is the cheapest iPhone and it still costs significantly more than the average Android phone, which is around $250.

> And iPhones and iPads have all day batteries a fraction of the size of laptop batteries, and without the space, mass, or fans of laptops.

Predominantly because they have smaller screens, and screens consume more battery than a mostly-idle CPU.

> The A14 will be able to run in MacBooks at a higher power level while still remaining cool, providing for more cores and faster processor speeds.

You still seem to be confused by this. "A higher power level" and "remaining cool" are the opposite of one another. If they use the power budget to have as many cores as e.g. a Ryzen 7 4700U then it will probably use about as much power on the same process node. They're not magic, they're just making different design trade offs for phones than AMD and Intel make for laptops.


Yeah you’re completely missing the mark on the iPhone SE. You physically cannot buy an Android device with a faster CPU at ANY PRICE. $1,500, hell $15,000, cannot get you a faster Android device than the iPhone SE. It’s a great metric for the future of Apple Silicon on desktops and laptops.


But the point isn't whether it's faster or not, it's that they have no reason to give the margins to you instead of keeping them. OP was making out like selling for $399 was some kind of a low price. There are sub-$50 Android devices. Their lowest priced iPhone costs significantly more than the average-priced Android device. Apple has demonstrated no interest in lowering their margins to chase volume.

Also, you can install Android on x64 tablets with basically equivalent single-thread performance and significantly better multi-thread performance. Hardly anybody does this because hardly anybody cares that much about performance on that class of device as long as it meets a particular threshold that any non-garbage modern CPU (including many Qualcomm chips) already does.


$399 is a low price. It is not the lowest it could be, sure, and Apple will indeed never lower their margins to chase volume.

But like they did with the SE, they will lower their parts costs to chase volume. Like they did here, by keeping the same iPhone production lines that built the iPhone 8 going (I'd presume it would be the same production lines that built their phones all the way back to the iPhone 6).


$399 is not a low price. $50 is a low price. $250 is a medium price. $399 is a medium-high price, but that's as low as Apple goes because they don't want to cannibalize sales of their very high price models.

> But like they did with the SE, they will lower their parts costs to chase volume.

This has very little to do with their parts cost. Their problem is that smartphones are increasingly becoming a low margin commodity because the existing hardware is "good enough" and many people would rather save hundreds of dollars than have a phone which is marginally faster in a way they can barely notice. So the price of Android phones goes down over time and on some level they have to compete with that.

But that's true independent of their costs. If Android phone prices get lower and Apple's production costs stay the same, they have to lower prices at the expense of margins or they'll start losing volume to Android. If Android phone prices stay the same and Apple's costs go down, nothing is making them sell for a lower price, so they just get to enjoy higher margins.

In theory there is an optimal sale price which maximizes profits and changing unit costs can affect what it is, but it isn't even guaranteed to be lower when margins are higher. Apple's brand is as a luxury product. If they lowered prices too much at the low end for any reason, that impairs exclusivity and their ability to extract very high margins at the high end.


You don’t get to give an opinion on what counts as a low price without any facts to back you up and then deny me doing the same. That’s preposterous.


One big advantage Apple will have is that they control the OS and the chip, so adding custom instructions or other custom chips that integrate fully with the OS (like the T2 chip), that kind of symbiosis is something Microsoft can't achieve with x86.

I'd also dare say we may see a revival of hybrid chip designs becoming more fashionable. With virtualization having matured, x86 and ARM on the same box may become a thing in some areas. Actually a few companies have tried that in the past, and one that jumped out (I was looking for Nvidia, who was at one stage working on such a chip): https://www.extremetech.com/computing/185888-vias-new-isaiah...

>It’s mind boggling to think of how much faster they will be, how much longer their battery life will be, and that they likely will be even cheaper.

I hope so. For a while, lower-power chips mostly meant manufacturers could use a smaller battery and make the device lighter and smaller, and advances in screen resolutions and panels soaked up much of the gain from CPU power savings in laptops and mobiles. We now seem to have hit the point where gains in power efficiency are being handed back to the user, and the "lasts a day or two" pitch carries more marketing weight than "hey, this is as light as a feather".


> One big advantage Apple will have is that they control the OS and the chip, so adding custom instructions or other custom chips that integrate fully with the OS (like the T2 chip), that kind of symbiosis is something Microsoft can't achieve with x86.

I'm particularly excited by the idea of Apple adding iPhone type security to their laptops.

I get all the counterarguments about owning your own device and freedom to tinker.

But I, personally, would be willing to make that sacrifice. Particularly since it's pretty easy to get totally open devices, whereas getting a secure laptop is not so easy.


With the R&D costs involved, there's no way that Apple does this to save costs. It makes even less sense financially unless Apple is planning to make Macs in much higher volumes than before... (which would be a bold bet)


Those R&D costs are shared a bit with their whole ecosystem, so whilst some features for a desktop-based version of the CPU would not be viable on a phone or an iPad today, down the line they may well filter in.

So they kinda already have the volume to cover R&D costs thanks to how well their phones and tablets have sold.


No, I don't think so. Intel squeezes Apple dry on CPU pricing. Intel squeezes everybody dry. There is no client big enough for them to lose.


Especially with the low volume for the high end Mac Pros, they'll have few choices for those machines with the transition:

1) Either they are going to sell those chips on the wider market, which probably won't ever happen, or

2) Eat the costs and make the machine unprofitable to retain a market, which isn't too likely either, or

3) Make the machine less expensive and trying to do a high-volume play out of it, or

4) Massively raise the costs.

There's no way out apart from those, which raises questions on what they'll do.


5) Treat the Mac Pro chips as a line item in their R&D budget for future iPhone and iPad CPUs.

6) Maybe Apple will never build "Mac Pro" class CPU. Instead the Mac Pro ekes out an existence with a higher thermal budget and more GPU/FPGA accelerator cards. (Imagine if Apple released many more FPGA programs for their Afterburner and then let you add two or even four into a Mac?)

7) Use their most powerful chips to build a games console and crush out Nintendo like Sony did to Sega twenty years ago.


For 1. and 3., sell the chips to specialized hosting companies who want to try things. Scaleway had an ARM hosting line.

There are many innovative and competitive hosting companies in Europe that take advantage of the cheap bandwidth. They won't sneeze at cheaper hardware.


Apple already did #4 - massively raise the base cost of the 2019 Mac Pro, so it could be that this is priced in already.


5) 40 and 50 core versions of the A14 that cost a fraction of Intel Xeon CPUs and offer similar performance.


That's just wishful thinking. Mac Pros don't have the sales volumes to ever recover the necessary R&D. Apple is also completely unwilling to sell their chips to competitors. Building a server grade CPU requires a completely different expertise. 99% of the work is in the interconnect, interfaces like DRAM, PCIe. Apple can do all of that except it will take half a decade and only lose the company money.

Instead of having such a foolish dream it makes more sense to just find an existing ARM server vendor.


Apple is definitely going to go for higher volumes. Imagine how well a better faster entry MacBook Air sells at $750 instead of $900. They can do this while maintaining their margins at the same levels.

And additional R&D costs are a drop in the bucket. They already have a big team to design processors, next year will be their 14th version. Every year they have a large team rev MacOS.

If they spent $1B a year on R&D for the Apple Silicon transition, they would get double that back the first year on lower CPU costs at a $25B sales rate. Except their sales are going to increase substantially. And they won’t even spend half a billion total on R&D for the entire two year transition; Rosetta already existed, the A14 was already being designed, etc, etc.
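
A back-of-the-envelope sketch of that claim, where every number is an assumption for illustration rather than anything Apple has disclosed:

    # All figures below are assumptions, purely for illustration.
    macs_per_year  = 20_000_000      # assumed annual Mac unit volume
    intel_cpu_cost = 200             # assumed average price paid to Intel per machine
    apple_cpu_cost = 75              # assumed in-house cost per Apple chip
    transition_rd  = 1_000_000_000   # assumed yearly R&D spend on the transition

    annual_savings = macs_per_year * (intel_cpu_cost - apple_cpu_cost)  # $2.5B
    net_benefit    = annual_savings - transition_rd                     # $1.5B ahead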


Some serious mental leaps taking place here, well done. I take issue with the assumption that Apple's behavior in the (increasingly incredibly niche) MacOS ecosystem will somehow change Microsoft's strategy in any way. This may be the passing of Mactel, but I doubt Wintel is going anywhere anytime soon.


Microsoft changed strategy with Windows 8 trying to make it tablet friendly because of the iPad (even though it shouldn’t have) and the entire reason behind the Surface line was because other OEMs could only make cheap commodity computers that didn’t compete with Apple on the high end.


I tried to find some evidence to back up your assertion that macOS is losing market share, but failed. Everything I can see shows that macOS is either holding steady or slightly gaining market share.


From the guy who thought the AT&T Hobbit was going to take over the industry.


Keep prophesying, one day you'll nail it.


Interesting read, and the prospect that we may see ARM desktops become more common, based on Microsoft making more of an effort with their ARM support, is also credible - I only recently saw Windows 10 running natively on a Raspberry Pi 4 - https://www.youtube.com/watch?v=BTT8GlsqXs8

Equally, the Linux desktop offerings have advanced and, with that, become more palatable for the average desktop user.

Certainly when it comes to most desktops in business, something that can open Word, email, the odd spreadsheet, and PowerPoint covers the bulk of usage right there. A small, lower-power alternative like the RPi 4 would do those jobs fine for most users. So Apple may actually shift that cultural hurdle that prevails in business, maybe.


There's already the Microsoft Surface Pro X that uses ARM. It also has a translation layer of some sort like Rosetta that allows you to run x86 (but not x86_64) Windows programs on it.


As much as I like Apple products, this is wishful thinking. Outside North America and a couple of tier-one countries, Apple is nowhere to be found unless one belongs to a rich family that happens to bring an Apple device from the outside during business travel.


Gassée actually significantly understates the power consumption difference between the Intel chip in his MBP and an iPad Pro. MBP 13s sustain around 35 watts of power indefinitely. An A12X sustains only around 10 watts.


Yes, but I read this:

> think faster, svelter laptops actually lasting 10 hours on a battery charge.

...and I instantly think of HP's Envy x360 w/Ryzen 4500U pulling almost 14 hours of battery life in productivity tests.

If Apple wants to stand out, it will have to push closer to 20 hours on a charge. Not saying they can't, just that it'll be a stretch. And great if they pull it off.


I'm mainly a Mac user, but recently bought one of these Ryzen laptops. The performance is amazing, especially for a low end / "budget" laptop.


I'm not sure Apple will be able to exceed Ryzen's[0] performance (as perceived by the user). They could get to match it, and of course having tighter integration between hardware and OS will give them opportunities to optimize and tune beyond what Windows and Linux can do, having to support a wider range of hardware.

[0] I'm considering Zen 2 "the best performing x86" (with a small grain of salt). If Intel wants to take that crown with Tiger Lake I'm all for it.


Is there a possibility that Apple starts gaining such a lead that none of the others are able to catch up? Could we all be running Apple hardware in, say, 7 years because the rest never could match the performance of Apple silicon, due to the late start?

Like, what are the odds of Qualcomm even matching today's A12 within 5 years?


Highly doubt it - even though Apple is a giant, you can't underestimate everyone else, who are also massive giants. Until now Qualcomm may have had no real reason to step out of its incremental performance increases with every generation, since almost everyone in the Android camp uses its SoCs for their flagships anyway. Similar to how Intel had been slowly pushing small gradual upgrades to their CPUs until AMD managed to do something better, and now we see Intel pushing more cores than before to catch up.


I'm truly ignorant on this. Who is Qualcomm's competition? Given that all Android phones are running just their chips, what's their incentive to catch up to Apple when they can just waddle along their current path?


Apple taking over the ARM laptop market is now their competition. Qualcomm stood to gain big if Windows on ARM took off and chewed off a part of the x86 market share, but as the author of the parent comment suggested, there's a chance Apple could leapfrog everyone and they won't be able to catch up, meaning fewer Windows devices. Qualcomm would be at a loss.


Thank you for explaining. I'm also the original commenter, lol.

I still worry about us entering into a monoculture.


Sorry missed that :)


> Who is Qualcomms competition ? given that all android phones are running just their chips

Mediatek, Samsung, HiSilicon (Huawei) are the high-end competition that produce competitive smartphone SoCs with the latest ARM cores.

Rockchip and Allwinner are the low-mid-end competition that produce a huge volume of low cost smartphone SoCs.

For smartphones, since anyone can license an A76 (or better) core from ARM themselves, Qualcomm's key moat is really in their modem IP - where the competition is Huawei, Unisoc, and whoever ends up purchasing Intel's out-to-pasture 5G division.

Amlogic, Broadcom (Raspberry Pi), Cavium (Servers + NAS appliances), Renesas (Automotive), NVIDIA, Ampere (Servers), and now Amazon (Graviton2) are all major ARM SoC manufacturers somewhat outside the direct Android smartphone space.


And it still doesn’t. The high end Android market is minuscule.

You can see how little it threw at creating competitive processors for wearables because the market just wasn’t there compared to Apple.


That's not true at all. Samsung's top Galaxy model alone sells about half as many units as Apple sells iPhones, and that's in the USA, where iOS has its highest market share. If you add other brands to the mix you can't call high-end Android sales minuscule.

Look at https://www.google.com/amp/s/www.androidcentral.com/samsung-...


That article says 2 million phones. Do you really think Apple ships only 4 million phones in the US? It also only includes the XR.


It talks about 1 quarter and one model from each brand.


That’s the issue. Samsung sells a lot of low end crap under the Galaxy brand also. If they didn’t, their ASP wouldn’t be so low.

https://www.gizchina.com/2020/06/14/the-average-selling-pric...

The article is bragging that the ASP of all Samsung phones is $292. That’s $100 lower than the lowest cost iPhone.

Apple also captures 66% of mobile profits.

https://www.counterpointresearch.com/apple-continues-lead-gl...


Low because there is no demand for high end processors.


In the meantime, can anyone recommend a desktop workstation-like device to start seriously playing around with daily driving ARM? Something more powerful than an rpi4 but with good open source support.


Pine64 is working on a great set of devices:

https://www.pine64.org/

For the workstation they don't have anything much above an RPi4, but they have an interesting MiniITX form factor for a 7-way cluster and are even working on an IP camera. Really cool set of devices.


Probably something like this: https://www.solid-run.com/nxp-lx2160a-family/honeycomb-works...

Multicore benchmarks say it is around the performance of 3 year old desktop chips. Not bad but the single core performance is probably awful.

https://www.phoronix.com/scan.php?page=news_item&px=HoneyCom...


I don't see how Windows would be able to compete on ARM TBH. Nobody is even close to the performance of Apple chips, and that gap is likely to grow over time as Apple piles on more and more specialized compute and takes full advantage of its vertical integration - something Microsoft cannot do. It's really nice to have a fairly restricted set of hardware to develop for. It's really nice to be able to just know that e.g. all your devices have a TPU, and a certain amount of GPU compute bandwidth, and certain hardware codecs, etc. On the Windows side this will never be the case unless Microsoft magically commits itself to chip design the same way Apple did, but even under the best of circumstances this will take a decade or so. Android is similarly handicapped.

Intel will be fine for at least the next decade. Businesses will still run on Intel, as will cloud. They'll figure out their process trouble and retake the performance crown again from AMD, although by a narrower margin. All of this has happened in the past. This is due to one simple truth: businesses couldn't give less of a shit what hardware their software runs on, and moving everything to ARM is so expensive, it's not going to happen for existing apps.

Most of us here would have a bout of OCD knowing we're not using the latest and greatest thing, but businesses don't have such a problem.


x86 and its ancestors eating the markets of [iAPX 432, Itanium, i860, i960] has taught a lesson in the value of an install base in the server and desktop market. Let's see if history repeats itself.

Apple has never seemed to really be into the concept of backward compatibility; but historically the rest of the industry has been.

Whatever happens, I'll bet it won't be boring.


So, because Apple is moving their general computing devices to ARM-based CPU/GPU, the whole world is going to do the same, Intel will die and world peace will be achieved?

Hold your horses. No one even knows how it will perform, how Rosetta 2 will perform and if Intel is not going to hit back as it did with Qualcomm and Microsoft.

Seeing is believing, after all.


It’s already clear they will be significantly faster, mobile battery life will be significantly longer, and Apple Silicon will be significantly cheaper than x86 processors. The A12 and A13 already demonstrate this in much tighter thermal packaging than MacBooks or iMacs, and they use a last-generation process node.

And Rosetta 2 has already been benchmarked, and the results were amazing even on a two year old iPad processor.

Intel has nothing to hit back with.


A few people have already published Geekbench 4 results from the DTK, and the performance is impressive already, especially considering that the DTK is using a 2-year-old chip, in an emulated environment. The chance that Intel will hit back is fairly small. In the last several years we have seen a gigantic brain drain from the company, and it is unlikely that they'll remain the reference for performance much longer.


Does anyone know if Microsoft has (or is working on) something similar to Apple's Rosetta emulation system? It seems to me this is the critical component that allows a transition to ARM to actually go forward without having to wait for every last developer to port their applications over.


The ARM Surface Pro X took a sort of opposite approach to Apple in several ways.

Rosetta: usually recompiles once ahead of time, only supports 64-bit

Microsoft: seems to recompile on the fly, only supports 32-bit

Also notable is the fact that the leaked benchmarks show the A12Z is generally faster when emulating an Intel CPU with Rosetta 2, than the Surface Pro X is when running natively, and the A12Z isn’t even Apple’s real desktop cpu, it’s just what they had on hand from the iPad (!)
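
As a rough illustration of that ahead-of-time vs. on-the-fly difference, here's a toy sketch (not how Rosetta 2 or the Windows emulator actually work internally; the real translators operate on machine code, not strings):

    # Toy sketch only: a stand-in for translating one x86 block to ARM code.
    def translate(block):
        return f"arm64({block})"

    x86_blocks = ["block0", "block1", "block2"]

    # Ahead-of-time (Rosetta-style): translate the whole binary once, e.g. at install.
    aot_image = [translate(b) for b in x86_blocks]

    # On-the-fly (JIT-style): translate a block the first time it runs, then cache it.
    cache = {}
    def run(block):
        if block not in cache:
            cache[block] = translate(block)
        return cache[block]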


And the A12Z is a two-year-old A12X rebranded (they literally just enabled a GPU core previously turned off for yields), using the exact same die.


This already exists in an early stage, ARM Windows can run 32-bit Intel binaries. Extending that to 64-bit binaries is the obvious next step.



I'm not sure what I'm looking at there; can you add a bit of context?


It's adding the linker flags for the ARM64EC architecture in Windows... which represents the x86_64 on arm64 execution mode to the WinRT C++ stub generator.


One question: why are they using the term "Apple Silicon" instead of "ARM"?

Is this a marketing move?


Marketing, probably, but also accuracy. Apple integrates a bunch of specialized co-processors as well. They want you to think of this, and the Arm bit, as one unit.


Yes. They want to take as much credit as they can. This isn't a dig at Apple, it's a rational decision.


Because you can't buy a Chromebook with the "superior" Apple Silicon. You can only buy a Chromebook with an "inferior" ARM chip. I personally like that naming. It gets the idea across. Apple is not part of the ARM ecosystem. Apple is its own isolated island.


This is wishful thinking. ARM was invented during the 1980s so is 'old' too. Intel tried to move away from x86 with Itanium but that failed, and AMD released the x64 extensions, eventually causing Intel to put full effort into those extensions too. Intel is moving to hybrid CPUs with Lakefield so the mixture of energy efficient and high powered isn't restricted to ARM anymore. Most people in business simply won't mix x86 and ARM binaries on Windows so they're going to be one field or the other. Those same people want to run those Windows tools at home too. ARM Macs will be a niche product for those looking to spend excessively.


> This is wishful thinking. ... Most people in business simply won't mix x86 and ARM binaries on Windows so they're going to be one field or the other. Those same people want to run those Windows tools at home too. ARM Macs will be a niche product for those looking to spend excessively.

I think this ignores the historical reasons for x86 success. Intel made their bones by crushing Unix boxes and mainframes with high volume x86 parts. PC CPUs were the highest volume microprocessor for decades, and they leveraged that into making high performance, yet low cost, server and workstation parts to buff up their margins.

Well, the tables have turned - ARM is far, far higher volume than x86 due to its ubiquitous use in smart phones. And ARM is going to do to Intel what Intel did to IBM, Sun, SGI, HP, DEC, etc. The problem is the same as it always has been: NRE (fixed costs) for new chips climbs ever higher and needs to be offset by even more volume to be acceptable to consumers.

ARM won that game in 2007. It's just taken this long before market forces and product maturity have forced Intel's hand. To be fair, Intel might have staved off the inevitable for a few more years had it not been for their well-publicized 10nm production stall.


> The problem is the same as it always has been: NRE (fixed costs) for new chips climbs ever higher and needs to be offset by even more volume to be acceptable to consumers.

While this is certainly true for bleeding edge nodes, hasn't the NRE become significantly lower for mature process chips? I would guess that EDA tools and a greater availability of IP blocks would drive down the real dollar cost of designing your own ASIC on a node 1-2 generations back.


> While this is certainly true for bleeding edge nodes, hasn't the NRE become significantly lower for mature process chips? I would guess that EDA tools and a greater availability of IP blocks would drive down the real dollar cost of designing your own ASIC on a node 1-2 generations back.

Honestly, I'm not familiar enough to have a nuanced opinion, but from what I understand, that's probably true.

However, in the battle for cloud VPS and PC CPU sockets, using a 1-2 generation old process doesn't appear to be a winning strategy.


>ARM is far, far higher volume than x86 due to its ubiquitous use in smart phones

Smart phones, tablets, TVs, projectors, routers and also refrigerators, 3D printers and ovens.

Their volume is 10x to 20x bigger than PCs and laptops.


Yeah, I don't understand this leap from Apple running ARM to Microsoft running ARM. No one can match the performance of Apple's ARM chips. If Microsoft uses ARM, it will be significantly slower than Apple's and Intel's chips, and that doesn't make sense.


I think there is an assumption that Apple will release products at lower price points without the Intel tax. This is somewhat supported by their greater focus on services and lower cost products like the iPhone SE.

If this is the case then the problem for Microsoft is losing Windows market share to macOS.


It really isn't because most of the Windows market is either corporate or 3D/Gaming related. In those markets Macs are still a niche product by a huge margin.


While I would love for Apple to lower prices, I don't see them doing so. It doesn't fit into their pricing pattern over the last decade or so.


Just look at the iPad.

The $329 10.2 inch iPad is a great buy. The $499 iPad Air is basically a rebranded first generation iPad Pro with a better processor.


Yes, that's the one outlier, but it's also the low end item in a relatively stagnant lineup. They haven't dropped the price on the iPad Pro, and most Macs have gone up in price in this period.


According to Ming Chi Kuo, who is one of the more accurate Apple prognosticators, the iPad will be moving to the low entry level price/high performance chip strategy of the iPhone SE later this year.

>Kuo predicts that the releases will "follow iPhone SE's product strategy." Specifically, he says that the models will be affordable, and adopt Apple's "fast chips" similar to how the iPhone SE used the chip found in the flagship iPhone 11 and iPhone 11 Pro models from the fall of 2019.

https://appleinsider.com/articles/20/05/14/108-inch-ipad-exp...

It would certainly be interesting to see Apple apply this same strategy to an entry level laptop.


The iPad Air is basically an iPad Pro without FaceID.

Apple brought a lot of the features from the iPad Pro to the Air and low end iPad, including pencil support and the keyboard attachment. They also finally left the 9.7 inch form factor to change the size to 10.2.

The iPad is plenty powerful enough for most users.

The MacBook Air has been $999 forever outside of a brief time when it was more expensive and a usable Mac Mini has usually cost $799. Not $699.

The iMacs have also maintained their prices.


Yeah the company selling a $5 piece of metal for a cool K will surely pass those savings on to the consumer.


ARMv8 was a modern redesign. Macs aren't running anything close to the ARM ISA of the '80s.


That same argument can be made for x86-64.


Intel chips still support older versions of x86. Apple only supports ARMv8.


Both are 'old', but one of them was a clean sheet design and the other was a stop-gap processor designed to sell a few units until their iAPX 432 was finished.

See "History of the 8086" from here: http://www.righto.com/2020/06/a-look-at-die-of-8086-processo...

See "History of the ARM chip" from here: http://www.righto.com/2015/12/reverse-engineering-arm1-ances...

It's depressing to think about how Intel was putting so much energy into a chip that never succeeded and succeeded with a chip that they never intended to keep around.


Except Intel is a half decade behind in process. Rosetta 2 will run all the x86 binaries as fast as on Intel at the same price points. Microsoft will have to double down on ARM for Surface to avoid Windows laptops becoming entirely uncompetitive.


Intel has been behind before, and eventually it surges ahead. I honestly think that is going to happen yet again. For the next 5 years Apple's silicon might be the top, but in 10 years? I'm not so sure.


Apple has yet to show any silicon that scales significantly above laptop CPUs, which is not trivial. Their achievement as a from-scratch, independent vendor is surely impressive, but let's keep in mind that all performance comparisons between the architectures so far are based on a single benchmark.


As someone who needs some decent Mac firepower, it does concern me a bit that we don’t really know if Apple can make a good high TDP processor, and everything you need to support that, like a good GPU.

But…does that matter all that much to Apple? They seemingly have made a recommitment as of late to the pro and prosumer market (see: Mac Pro), but for a long time there, they sold laptop Macs, and begrudgingly sold desktop Macs because, I don’t know, education sales. If they nail MacBook Air-class power envelopes, which I have absolutely zero doubt they can, and they don’t knock a new iMac Pro out of the park…is that really going to hit their bottom line all that much?

(Yes, in some way, there’s a halo effect, and developers won’t be happy, nor will creative pros, but if they get great MacBook Airs with higher profit margins, then they probably consider that a win.)


I may be mistaken, I'm not nearly knowledgeable enough about hardware to say this with full confidence, but wouldn't the very low power and heat requirements of their laptop-CPU-competitive chips suggest that they have a lot of room to increase performance simply by increasing clock speed?


No, because voltage-frequency scaling is very non-linear (and process and design dependent) and designs often just have a "brick wall" they can't progress over.

For example, it was practically impossible to get Zen 1 CPUs over 4 GHz on air, not because they got too hot, but because they'd just crash almost immediately, regardless of voltage. Similarly, you can't push the current Intel generation beyond ~5.3 GHz, regardless of voltage. Current Zen 2 CPUs are highly binned; some bins cannot go beyond ~4.2 GHz, regardless of voltage and cooling, some can do 4.7 GHz.


The slowest operation on the entire CPU has to finish within a single clock cycle. This is basically a design issue. If you design your CPU for 6GHz then the slowest operation isn't allowed to take more than 0.17 nanoseconds. This gets harder the tighter the tolerances are, but we can go way beyond what current CPUs run at. The reality is that power consumption/heat prevents us from running CPUs at 6GHz, so we don't build CPUs that can run at these frequencies. The manufacturing process can also play a big role (some processes are more efficient at ~2.x GHz than 4.x GHz), but its biggest impact by far is reducing power consumption to allow higher frequencies.
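
The arithmetic behind that 6 GHz figure, plus the converse view that the slowest ("critical") path caps the achievable frequency (the 0.25 ns path delay below is an assumed number for illustration):

    target_freq_hz  = 6e9
    clock_period_ns = 1e9 / target_freq_hz       # ~0.167 ns per cycle

    # Conversely, a design's worst-case path delay sets its frequency ceiling.
    critical_path_ns = 0.25                      # assumed worst-case stage delay
    max_freq_ghz     = 1 / critical_path_ns      # 4 GHz ceiling for that design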


This.

I can't wait to see what Johny Srouji's team can do when they're not targeting a mobile device.


Intel surged before because it had the volume of sales, thanks to the PC market, to throw at the problem.

Apple alone sells more ARM based devices than the entire personal computer market and probably the server market. Apple also has more money to throw at it. Intel couldn’t make 4G/5G work and sold that division to Apple.

On the server side, AWS is designing its own chips (disclaimer I work for AWS but nowhere near that division).


10 years is an eternity in tech.

Even today, AMD is very competitive against Intel and I think JLG's point was that Apple's move to ARM will boost the entire ARM ecosystem significantly.

Qualcomm and Nvidia won't miss a chance to try to dethrone Intel. AMD, too.

Intel could be in danger.


AMD was ahead of Intel in the Pentium 4 era, and ATI (now AMD) was ahead of Nvidia during that period as well. Then it has been back and forth for the past 15 years.


Just take a look at this 2013 HN comment:

https://news.ycombinator.com/item?id=5589410

Kinda prophetic right?


> Intel execs know they missed the Smartphone 2.0 revolution because of culture blindness. They couldn’t bear to part with the high margins generated by the x86 cash cow; they couldn’t see that lower margins could be supported by unimaginable volume

Clayton Christensen's Innovator's Dilemma at work.

It’s amazing how Apple with just 7% of market share can/could change the entire industry. They did the same with Flash (for the better) and this time with the PC. Interesting times ahead.


(1) Many comment writers on this site have been predicting the replacement of AMD64 with ARM in personal computers for about 12 years.

(2) Apple's announcement had no discernible effect on Intel's market cap, which, even after adjusting for inflation, is higher today than it has ever been except for an interval of about 2 or 3 years at the end of the dot-com boom.

(I used a CPI calculator, a market-cap chart and one of my eyeballs to estimate the duration of this interval.)


I love how people like to talk about Apple vs Intel.

But Intel has already been beaten by AMD, and for the foreseeable future as well. Even on the mobile side, AMD has more performance with less power.

How will Apple stack up with AMD is the real question. This move is great for their low powered devices. But when it comes to The Pro devices, I'm sure they would have more performance for the next half decade if they just moved to AMD.


Earlier discussions assumed some amount of cloud processing to cover for less powerful chips. Is that not the case?


Not needed now. My A12 equipped iPad has considerably more grunt than my i7 intel laptop.


>My A12 equipped iPad has considerably more grunt than my i7 intel laptop.

I know this meme gets brought up a lot, but has this claim been backed up by benchmarks that aren't Geekbench?


Yes, the ARM Mac developer transition kit, which runs a much wider variety of software with the same chip (including x86 software), answered that.

In IDA Pro, the DTK is actually faster than a 16 inch MBP in practice... (and that's with an x86_64 copy of IDA, as an arm64 one doesn't exist yet; the workload being tested was loading an iOS kernelcache)


Yes, SPEC, which is a real world suite of applications, like GCC: https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-re...


I was too lazy to collate the numbers together, so I found this comparison chart https://old.reddit.com/r/iphone/comments/9lup3m/iphone_xs_sp.... The A12 looks faster in some cases, but not by much overall. It definitely doesn't support the claim that it has "considerably more grunt" than an i7.


Some comparison points. It's not all about hard benchmarks but what workloads you throw at the devices.

1. The A12Z in my iPad has 2 more vortex cores than the base A12 and 4 more GPU cores.

2. I'm comparing to my laptop, a T470 with i7-7600U in it.

3. My (I did explicitly say that) laptop's modern equivalent, an i7-8565u, is dog shit compared to the iPad still and costs 100% more.

Some benchmarks if we need them:

https://browser.geekbench.com/processors/intel-core-i7-8565u

https://browser.geekbench.com/v5/cpu/2898525

Now if you do any video processing on the ipad, it'll chunk through 4k like my Ryzen 3700X does if it's HEVC and barely even get warm, but that's the T2 ASIC in it.

The point being, if you compare a modernish "professional laptop" with an i7 in it, to an iPad, the iPad will rip it a new orifice.

ARM + ASICs for other functions (ML/security/codec) run rings around any general x86 CPU on performance per watt for sure.

Comparing as you did to a Xeon with 165W TDP is comedy but illustrative yes.


>Some benchmarks if we need them:

>https://browser.geekbench.com/processors/intel-core-i7-8565u

>https://browser.geekbench.com/v5/cpu/2898525

My original question: "but has this claim been backed up by benchmarks that's not geekbench?"

>Now if you do any video processing on the ipad, it'll chunk through 4k like my Ryzen 3700X does if it's HEVC and barely even get warm, but that's the T2 ASIC in it.

>ARM + ASICs for other functions (ML/security/codec) run rings around any general x86 CPU on performance per watt for sure.

Comparing hardware accelerated HEVC performance to software CPU performance isn't fair and is shifting the goalposts. If you're going by that logic you can also say that a Snapdragon 865 (hardware rendering) has "considerably more grunt" than a 64 core Threadripper (software rendering). While we're at it, we can also compare the A12 to Intel/AMD in other aspects, such as memory support, floating point performance, and sustained performance, all of which I'm confident the A12 wouldn't do well in.

>Comparing as you did to a Xeon with 165W TDP is comedy but illustrative yes.

So? The xeon cpu also has 28 cores and 56 threads, whereas the A12 only has 2 "big" cores (the ones being benchmarked). Dividing 165 by 28 gets you to a more reasonable TDP of 5.89W per core.


This review[0] has SPEC2006 scores for the A12X in the iPad Pro 2018. The same review estimates the per core power at around ~4W.

This review[1] compares the SPEC2006 score of the A13 to high end modern desktop chips. The A13 is very close despite its power consumption being much lower.

[0]: https://www.anandtech.com/show/13661/the-2018-apple-ipad-pro...

[1]: https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...


Your iPad doesn't have a T2 in it. That's the name for the ARM coprocessor inside of newer Macs (2018+), which IIRC shares some similarities with the A10, including (probably) the media block.

The 3700X contains its own encoding blocks which would make a better comparison.


AFAIK only Ryzen APUs have hardware accelerated encoding.


ah, my bad. I was thinking of the 3700U.


No, it's a ridiculous claim. Not to mention that "superior" performance lasts a few seconds before throttling.


i7 from when?


I'm comparing to mobile U-series in my (explicitly stated) laptop which is a T470. So for me an i7-7600U. But if you throw a T490 on the table with an i7-8565U it'll blow holes in that as well. As well as I'm sure nearly all mobile CPUs.

I'm sure the A14 or whatever comes along, with higher core count and better thermals for desktop use, will do the same for desktop CPUs.



I mean, an i7-8565U is old too, and was released before they could get the spectre/meltdown fixes in hardware.

An i7-10510U (which is the closest thing I could find, an upper mid range ultrabook part) is significantly faster than an A12Z.


Not just from when, but type of processor as well. U? H? HQ?


I’ve always found it amazing Intel didn’t make their own OS; eventually software companies will try to eat you.


Hmm, this was posted 1 hour ago, 49 points, 84 comments, and it already moved down to 31st slot, off of front page (from somewhere in the top 15 just 5-10 minutes ago). That seems surprisingly fast.

I noticed because I wanted to come back to read more comments and had a hard time finding it.


Comments outnumbering votes is one of the heuristics used to determine if a thread is likely to be toxic rather than promoting good discourse, so that may be what dampened this off the front page.


I see, thanks for the explanation.


Strange how Microsoft apparently has no way of providing a Rosetta like solution for Windows on ARM.


Interesting... Java was the promise of write once, run everywhere, and I guess that is somewhat true, but what got there faster and better is the browser as a platform. For most things the browser doesn't need Intel or Microsoft, either for running it or for hosting it.


What I can't reconcile is NVDA's stock price given the vast majority of their revenue and dominance comes from supplying GPUs to the Wintel industry. What is their position in a world where Wintel starts to wind down?


How much of the improved efficiency is really ARM vs Intel, and how much is that the new Apple silicon will now be TWO process generations ahead of Intel in miniaturisation? That must count for something too.


You can look at AMD CPUs if you want an apples-to-apples comparison, as theirs are on the same TSMC nodes as Apple.


AMD is going to go with an enhanced version of TSMC's 7nm process for their next round of CPUs.

Apple will be using TSMC's new 5nm process for the first version of its laptop class chips, so they will have a process node advantage in the upcoming generation.


I guess we could compare AMD's new chips with Apple's old ones and see how that looks.


The A13 in Apple's iPhone is on the same TSMC process node as AMD's current chiplets and compares quite favorably despite a greatly reduced power budget and a lack of active cooling.

>Last year I’ve noted that the A12 was margins off the best desktop CPU cores. This year, the A13 has essentially matched best that AMD and Intel have to offer – in SPECint2006 at least. In SPECfp2006 the A13 is still roughly 15% behind.

https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...


The Apple core runs at 5W. Consider that AMD's chips run at significantly higher clock rates. The power budget isn't greatly reduced. It's basically the same. Apple has little room for improvement. They have to stay below 2.4 GHz to keep the same power budget.

Just read the article you linked.

>Apple’s marketing materials describe the A13 as being 20% faster along with also stating that it uses 30% less power than the A12, which unfortunately is phrased in a deceiving (or at least unclear) manner. While we suspect that a lot of people will interpret it to mean that A13 is 20% faster while simultaneously using 30% less power, it’s actually either one or the other.


I did read it. That 5 watt figure is with one of the two big cores fully loaded. AMD idles with a bigger power draw than that.

>In the face-off against a Cortex-A55 implementation such as on the Snapdragon 855, the new Thunder cores represent a 2.5-3x performance lead while at the same time using less than half the energy.

https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...

When compared to other designs that operate under the same cooling and power constraints, there is absolutely no competition.


The point is not that Intel can't do it, it's that Intel hasn't done it.


It kinda seems like they also can’t.


Until ARM can give processing power comparable to x86-64, power limits be damned, x86-64 will continue to rule the desktop and large parts of the server market.


The last time I played around with Visual Studio, I vaguely remember some options for compiling for ARM.

No surprises if Windows eventually moves to ARM. (And if servers do too.)


Microsoft have released at least two separate generations of ARM devices running desktop Windows (Surface with Windows RT back in 2012, and Surface Pro X more recently), so they're definitely up for it, even if the market hasn't been yet.


Microsoft's Surface device runs Windows on ARM, and has for a while.


Note that devices with Windows on Arm (64-bit) shipped from third-party OEMs first, with the Asus NovaGo being available in late 2017 for example.


Windows is already running on ARM, and in an actual production model that's been on the market for a while. It's just that Apple does a lot better with marketing and makes a much bigger deal of these things. Microsoft on the other hand has always been really bad at marketing.


Windows for ARM can't emulate Intel x64 software. Nor will it run games using OpenGL versions above 1.1, among other things.

I disagree that Apple is only better at marketing things like this; they're also much better at doing it. And their processors are presumably going to be ahead of anyone else's ARM based computers, just like they've been with phones.

https://docs.microsoft.com/en-us/surface/surface-pro-arm-app...


> Windows for ARM can't emulate Intel x64 software.

This was confirmed as coming soon, and will almost certainly be available by the time the new Apple Silicon devices are released.

> Nor will games using OpenGL versions above 1.1, among other things.

OpenGL is just a userspace thing that gets loaded into your address space, and the specific installed dll (ICD) will convert it into hardware ioctls. Microsoft only ever implemented GL 1.1 (as a DX wrapper) and newer versions were always provided by AMD/NVIDIA/Mesa as an ICD.

If the vendor drivers didn't implement anything newer than 1.1 you can still do it yourself. What accelerated interface is available? ANGLE will give you 3.0/3.1 over DX9/DX11 (ships with Chrome and some Windows features); Zink will give you OpenGL 3.0 over Vulkan; the old QindieGL will give you GL 1.4 over DX9. And of course Mesa llvmpipe will give you 4.2 in software.
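
If you want to see which of those layers you're actually getting, a small probe script works; here is a sketch in Python assuming the third-party pyGLFW and PyOpenGL packages are installed (the renderer string will name ANGLE, llvmpipe, or the vendor driver):

    # Minimal probe: create a hidden GL context and report what backs it.
    import glfw
    from OpenGL.GL import glGetString, GL_RENDERER, GL_VENDOR, GL_VERSION

    if not glfw.init():
        raise RuntimeError("GLFW init failed")
    glfw.window_hint(glfw.VISIBLE, glfw.FALSE)       # no on-screen window needed
    window = glfw.create_window(64, 64, "gl-probe", None, None)
    glfw.make_context_current(window)

    print("Vendor:  ", glGetString(GL_VENDOR).decode())
    print("Renderer:", glGetString(GL_RENDERER).decode())
    print("Version: ", glGetString(GL_VERSION).decode())
    glfw.terminate()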


I didn't compare the two options, just saying that Apple makes a much bigger splash with their announcements than Microsoft. A lot of people here haven't noticed that there's a Windows version running on ARM, but everyone knows about Apple switching to ARM.


I guess what I really mean to highlight is that it's not just a question of Microsoft being bad at marketing - it's more like they don't have a mass-marketable product.

"Some of your software will work on this" is a throwback to Windows XP 64-bit edition, people don't want to deal with that.

The limited emulation support means it's more capable than Windows RT's "native ARM software only" was, sure. But it's even less straightforward to explain to a potential user whether or not they could use it.


Because no one uses Windows for ARM, so there is no interest in talking about it. Apple Silicon will be used by millions within months of release.


That is also known as good marketing.


> Its just that Apple does a lot better with marketing and makes a much bigger deal of these things. Microsoft on the other hand has always been really bad at marketing.

It's more complicated than that: It's all about understanding disruptive technology. Arm is disrupting x86 / x64.

The thing with disruptive technology is that being ahead of the curve is just as bad as being behind the curve. Microsoft has no benefit from "pushing" Arm, all they need to do is make sure that Windows is ready to run on Arm when someone's ready to sell an Arm desktop computer.

But, Apple, as a hardware company, needs to be much more proactive about where the market is going and actively switch its hardware. If this means that Microsoft's first major Arm market for Windows are VMs on Apple, so be it. The rest of the PC makers will switch or be disrupted.


A big factor is that Apple’s ARM SoCs in their phones/tablets are impressively performant and power efficient, and Apple have shown a consistent pattern of 20-50% year over year improvements in the A chips over last 5+ years.

If Apple’s latest ARM CPU in 2020 was the A7 instead of A13, ARM chips in MacBooks would’ve been equally uninteresting.


Cool, I really hope this means people will stop saying "Wintel" now, especially in places like job descriptions (where it just looks stupid.)


Yes, and maybe we can start calling Apple computers PCs (Personal Computers), which is what they are. At least they seem more personal than my multiuser Linux desktop system.


A few commentators have noted that it’s unfortunate we renamed microcomputers “personal computers” because clearly smartphones are much more personal.


Personal except for the fact that you really own your smartphone in name only, whereas you owned a 1980s personal computer lock, stock, and barrel.


But now “Wintel” becomes a descriptive term that isn't synonymous with “Windows”, and therefore more, rather than less, useful.


I'm a bit surprised at the use of the term "Apple silicon". Everybody seems to have started to use it. As best as I can tell, this is a term invented by Apple's marketing department. I believe it's meant to imply that Apple is producing some magic that goes beyond the CPU.

Readers of this site will of course understand exactly what is being discussed, but I'm concerned that the overuse of this marketing term confuses things for non-technical readers. There are already more common, and more accurate terms that can be used instead.


For CPUID, it makes sense.

   GenuineIntel
   AuthenticAMD
   AppleSilicon


The downvotes sure came quickly on my original post.

My point was that we don't refer to AMD CPUs as "Authentic AMD" in general speech. However, in Apple's case they have succeeded in promoting their own narrative.

I understand that this happens, but I was just intrigued by the fact that they managed to do it. The difference in terms does have a subtle effect in conversation however.


You're not going to run android or regular linux on your iPhone so why bother advertising it as ARM?


I have been using an iPad for over 10 years, am a huge fan.

But it’s disingenuous to compare benchmarks for single threaded performance between the iPad and MacBook.

I can’t do any ‘very’ serious productivity work on the ipad that requires multiple apps, period. (try moving a photo between multiple photo-editing apps, ha.)

I hope that is just a ‘feature’ of iOS, or else macOS will suffer greatly as people leave it for more productivity-based OSes.


This is not at all disingenuous. The fact that the environment in an iPad is restrictive versus a general-purpose computer is not particularly related to the performance of the processor powering each. There is no reason to think that (when the environment allows it) the performance of more “serious” apps will not be equivalent.


On a related note, why aren't Qualcomm ARM chips performing as well on benchmarks?



Why isn't my 386 performing as well on benchmarks?


I'd stick with AMD and Linux.


I feel like I've seen this movie before. Ah yes, PowerPC.

Steve Jobs reveals the transition to Intel: https://www.youtube.com/watch?v=ghdTqnYnFyg

I don't know if history will repeat itself, but it will be interesting to watch.


Indeed, it will be interesting to watch. Last time they had Steve to run the show.

If the last few years are to be a guide, it will crash and burn. Apple hasn't been developer and power user friendly for some time now. Focusing on thinner laptops while sacrificing the keyboard, and the Touch Bar, to name two. I've been hearing from skeptical developers who are planning to buy before the switch to ARM, which is a different vibe from when they switched to Intel.

It could be a massive marketing stunt from MS if they announce a more macOS-like version of Windows which actually runs on Intel.


Steve wasn’t there for the 68K to PPC transition.


Hey, developer here. I'm looking forward to the ARM MacBooks for the longer battery life...


One thing people prophesying about the Apple ARM transition changing the industry tend to forget: Apple is not a leader, but actually a latecomer in the ARM space.

macOS is the last major OS getting ARM support. iOS and Android have been built with ARM support from the very beginning. The first commits working on ARM support for Linux were made more than a decade ago. The first publicly released Windows on ARM hardware came out in 2012, and MS had undoubtedly been working on it for years before the commercial release.

If anything would trigger an ARM transition on a massive scale, it would likely be the work invested in the high performance ARM hardware by the companies such as Amazon and Fujitsu.


Besides the various mentions of iOS being on ARM, it's also worth mentioning iOS is a fork of macOS (née Mac OS X), and was explicitly branded as the same OS when the iPhone launched. In other words, macOS has been capable of running on ARM—at least internally at Apple—for 13 years.

It's highly likely that Apple has maintained internal macOS ARM builds for every successive release since, if not since before the Intel transition. They had similarly maintained x86 builds of macOS from the time they began its development after the NeXT acquisition.


ARM was initially created as a joint venture between Apple and Acorn in the late 80s....


Linux has supported ARM since the late 90s, actually.

edit, source: https://github.com/mpe/linux-fullhistory/commit/3c2a0eb7ba89... ("initial ARM support", Linux 2.1.80, 20.1.1998)


Apple first shipped an ARM device in 1993, the Newton.

https://en.wikipedia.org/wiki/Apple_Newton


Knowing how Apple had OS X running on Intel for years before they announced the switch, it wouldn’t surprise me if macOS has been running on ARM since Apple started making 64-bit chips, all the way back to the A7.


Uh, you know that Apple makes iOS too, right?


As far as I know, the first commits working on ARM support for Linux were in 1994, which is quite a lot more than a decade ago.

Debian has had an ARM port since 2000.


Apple was the first to release a mainstream computing device on the arm64 platform with its chips in the iPhone 5s, and I believe this took the rest of the computing world by surprise, perhaps 1-2 years ahead of anyone else.


MacOS is built on Darwin, just like iOS.


Windows has run and will run on x86/ARM. Linux has run and will run on x86/ARM. macOS has run on x86, and will run on ARM.

People have a choice with the first two, not the third. This is why Apple going forward with this is a big deal.


iOS is Apple, too, you know.



