> The only pre-RISC legacy ISA in use is x86, and it is only losing market share.
And for many generations now, x86 machines are basically RISC processors with a CISC frontend.
Empirically it seems that CISC has 'failed' as a way to design processors, and it's better to let the compiler do that job when you're building a general purpose computer.
That is a meme. Even RISC processors use uops. uops are often wider and more complex than the ISA instructions that they are derived from.
The reason is that many features of a CPU instruction boil down to toggling some part of the CPU on or off, so four one-bit flags that can drive hardware directly are better than a densely packed two-bit field that still has to be decoded. What this means is that the uop layer can express more distinct operations than the ISA layer. When that is the case you can hardly call the internal design a "RISC" processor, especially since the ISA wars were specifically about ISAs and not microarchitecture. And even if we say that uops are RISC instructions, that is still an argument against RISC ISAs: why bother with RISC as an external interface if you can just emulate it? Your comment seems rather one-sided.
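To make the flags-versus-dense-encoding point concrete, here is a minimal sketch in C with entirely made-up field names: the ISA packs a choice densely into two bits, while the internal uop carries one enable bit per datapath feature, so it is wider but needs no further decoding.

```c
#include <stdio.h>

/* Purely illustrative encodings -- not any real ISA or core. */

/* ISA side: a densely packed 2-bit field selects one of four widths. */
enum isa_width_sel { W8 = 0, W16 = 1, W32 = 2, W64 = 3 };

/* uop side: the same information unpacked into one enable bit per
 * datapath feature, so each bit can drive a control line directly.
 * The internal control word ends up wider than the ISA field it
 * was decoded from. */
typedef struct {
    unsigned en_byte_lane  : 1;  /* gate the 8-bit lane  */
    unsigned en_half_lane  : 1;  /* gate the 16-bit lane */
    unsigned en_word_lane  : 1;  /* gate the 32-bit lane */
    unsigned en_dword_lane : 1;  /* gate the 64-bit lane */
    unsigned sets_flags    : 1;  /* does this op update the flags?     */
    unsigned bypass_ok     : 1;  /* may the result be forwarded early? */
    /* ...plus many more one-bit controls in a real core. */
} uop_ctrl;

/* Decode: expand the dense 2-bit ISA field into the sparse internal form. */
static uop_ctrl decode_width(enum isa_width_sel w) {
    uop_ctrl u = {0};
    u.en_byte_lane  = (w == W8);
    u.en_half_lane  = (w == W16);
    u.en_word_lane  = (w == W32);
    u.en_dword_lane = (w == W64);
    return u;
}

int main(void) {
    uop_ctrl u = decode_width(W32);
    printf("32-bit lane enabled: %u\n", u.en_word_lane);
    return 0;
}
```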
Also, the x86 was the most RISC-ish of its cohort (ld/st architecture, single simple addressing mode, etc). Whether that was uncannily far-sighted design, a result of Intel's internal architectural history, or pure luck, people still argue.
Other designs from the same time followed on from the wunderkind of the era: the VAX, which everyone loved and wanted to emulate.
The big change of the time, though, was in the memory hierarchy: caches were pushed closer to CPUs (and eventually pulled on-die), which favoured less densely encoded ISAs, more registers, and instruction sets that didn't require a full memory access for every instruction.
In my professional lifetime we've gone from 'big' mainframes with 1MHz core cycle times (memory cost more than $1M/megabyte), to those VAXes (actual silicon DRAM!), to what's sitting on my lap at the moment (5GHz, 8/16 cores, 64GB of DRAM, etc).
I don't think CISC 'failed'; it was simply a child of its time, and the constraints changed as we moved from giant wire-wrapped mainframes with microcode to minimise ifetch bandwidth, to LSI minicomputers, to VLSI SoCs with multi-megabyte on-chip caches.
Well, to be fair, this was easy to predict. RISC was only created because CISC had already failed by that time; people were already letting their compilers do the job and leaving the specialized instructions basically unused.
In hindsight, sure, it seems obvious, but I don't think it was that obvious that CISC performance wouldn't end up scaling. What wasn't obvious, to me anyway, was that even with all of that complicated decode, register renaming and whatnot, these CISC processors would manage to stay competitive for so long. Maybe it's just the force of inertia, though, and if there'd been a serious investment in high-performance RISC machines in the 2000s, x86 would've been left in the dust.
Oh, I'd say it is still not really obvious. Sure, RISC has a complete win now, but there is no reason to think that the ultimate architecture for computers won't be based on complex instructions.
We even had some false starts: when vector operations were first created, they were quite complex, then they were simplified into better, more reusable components; likewise, when encryption operations started to appear, they were very complex, then they were broken into much more flexible primitives (see the AES sketch below). There is nothing really saying that we will be able to keep breaking down all kinds of operations forever.
But still, I wouldn't bet on any CISC architecture this decade.
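For a concrete sense of what "broken into flexible primitives" looks like today, here is a minimal sketch using x86's AES-NI, where the ISA exposes a single-round primitive that software composes into whatever it needs (full AES, key schedules, AES-based hashing). The helper name is made up and the round keys are assumed to be already expanded.

```c
#include <wmmintrin.h>   /* x86 AES-NI intrinsics; build with -maes */

/* AES-128 encryption of one 16-byte block, composed from the per-round
 * primitive (aesenc) rather than one monolithic "do all of AES"
 * instruction.  rk[] must already hold the 11 expanded round keys;
 * key expansion is omitted, and the helper name is hypothetical. */
static __m128i aes128_encrypt_block(__m128i block, const __m128i rk[11]) {
    block = _mm_xor_si128(block, rk[0]);            /* initial whitening   */
    for (int r = 1; r < 10; r++)
        block = _mm_aesenc_si128(block, rk[r]);     /* one round per call  */
    return _mm_aesenclast_si128(block, rk[10]);     /* final-round variant */
}
```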
Anyway, the reason x86 lasted for so long was Moore's law. This is patently clear in retrospect, and obvious enough that a lot of people called it ahead of time, ever since the '90s. Well, Moore's law is gone now, and we are watching the consequences.
Won't CISC architectures always have the benefit of being able to have dedicated silicon for those complex instructions and thus do them faster than many smaller instructions? I understand RISC-V does instruction fusing, which provides a lot of the same benefits, but I'm surprised ARM gets around this.
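To illustrate what "instruction fusing" buys, here is a minimal C sketch with the RV64 code it typically compiles to shown in a comment; the add+load pair is one commonly cited fusion candidate, so a fusing core gets the effect of a complex addressing mode without putting it in the ISA.

```c
/* x = a[i] on RV64 typically compiles to something like:
 *
 *     slli  a1, a1, 3      # scale the index: i * 8
 *     add   a0, a0, a1     # a0 = &a[i]
 *     ld    a0, 0(a0)      # x  = a[i]
 *
 * A fusion-capable core can treat the add+ld pair as a single internal
 * "load with register-offset addressing" op, so the dedicated-silicon
 * benefit exists inside the core even though the ISA stays simple. */
long indexed_load(const long *a, long i) {
    return a[i];
}
```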
Dedicated silicon for custom instructions is now quite favored actually, because of the well known "dark silicon" problem. I.e. most of the die actually has to be powered down at any given time, to stay within power limits. Hence why RISC-V is designed to make custom, even complex instructions very easy to add. ("Complex" is actually good for a pure custom accelerator, because it means a lot of otherwise onerous dispatch work can be pulled out of the binary code layer and wired directly into the silicon. The problem with old CISC designs is that those instructions didn't find enough use, and were often microcoded anyway so trying to use them meant the processor still had to do that costly dispatch.)
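As a rough sketch of what "easy to add" looks like from the software side (the accelerator, opcode choice, and names here are all hypothetical): RISC-V reserves custom opcode space, and a stock GNU toolchain can emit an instruction it has never heard of via the .insn directive.

```c
#include <stdint.h>

/* Hypothetical accelerator instruction placed in RISC-V's reserved
 * "custom-0" opcode space (major opcode 0x0b).  The operation itself is
 * imaginary; the point is only that the encoding slot exists and an
 * unmodified assembler can still emit it with .insn. */
static inline uint64_t accel_op(uint64_t a, uint64_t b) {
    uint64_t result;
    __asm__ volatile(
        ".insn r 0x0b, 0x0, 0x0, %0, %1, %2"  /* R-type: opcode, funct3, funct7, rd, rs1, rs2 */
        : "=r"(result)
        : "r"(a), "r"(b));
    return result;
}
```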