
Is a RISC still a RISC when you have complex instructions like this?


The best answer to any RISC vs CISC question is the analysis John Mashey posted to the Usenet group comp.arch: https://www.yarchive.net/comp/risc_definition.html (Mashey was a CPU designer for MIPS.)

In the analysis he counts instruction set features like number of registers, number of addressing modes, and number of memory accesses per instruction. He compares over a dozen architectures.

ARM comes out as the least RISCy RISC, but definitely on that side of the line, and x86 as the least CISCy CISC. (This was before amd64.)


I think the notions of CISC and RISC are practically meaningless in 2020.


Yeah. RISC kinda became a shorthand for "fixed instruction length, load-store" and CISC became a synonym for "x86/amd64".
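
To make the "fixed instruction length" half of that shorthand concrete: with 4-byte instructions, the n-th instruction sits at a known offset, which is a big part of why RISC front ends are simpler. A toy C sketch (my own illustration, not any real decoder):

    #include <stdint.h>
    #include <stddef.h>

    /* Fixed-length (e.g. 32-bit) encoding: instruction n is at a known
       offset, so a wide front end can fetch several per cycle. */
    uint32_t fetch_fixed(const uint32_t *code, size_t n) {
        return code[n];  /* one aligned read */
    }

    /* Variable-length x86 (1-15 bytes per instruction) must decode
       sequentially just to find where the next instruction starts. */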


It's for sure a hybrid, given that these instructions were microcoded on early ARM cores. But it was a really useful halfway point: those early ARM cores lacked caches, unlike prototypical RISC chips, so these instructions would otherwise have been competing with the memory transfers themselves if they hadn't maximized density into a single aligned instruction.


Maybe not, but it's still at least an order of magnitude fewer instructions than x86 :)


It is my understanding that it is "reduced-instruction set computer" rather than "reduced instruction-set computer".

That is, "reduced" means each instruction does less, rather than there being fewer of them.


Wikipedia[1] both corroborates and disagrees:

> A RISC is a computer with a small, highly optimized set of instructions

but later:

> The term "reduced" in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced—at most a single data memory cycle—compared to the "complex instructions" of CISC CPUs that may require dozens of data memory cycles in order to execute a single instruction

1. https://en.wikipedia.org/wiki/Reduced_instruction_set_comput...
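
To illustrate the "at most a single data memory cycle" point, here's a sketch in C with the kind of assembly a compiler might emit (my illustration; actual output depends on compiler and flags):

    /* Incrementing a counter in memory. */
    void bump(int *counter) {
        *counter += 1;
    }

    /* x86 (CISC) can express this as one read-modify-write instruction:
           addl $1, (%rdi)
       A load-store RISC such as RISC-V splits it into three, each
       touching memory at most once:
           lw   a1, 0(a0)
           addi a1, a1, 1
           sw   a1, 0(a0)                                              */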


From this[1] piece it seems the original goal was indeed both:

> Cocke and his team reduced the size of the instruction set, eliminating certain instructions that were seldom used. "We knew we wanted a computer with a simple architecture and a set of simple instructions that could be executed in a single machine cycle—making the resulting machine significantly more efficient than possible with other, more complex computer designs," recalled Cocke in 1987.

[1]: https://www.ibm.com/ibm/history/ibm100/us/en/icons/risc/


IIRC the idea is that the judge of complexity is ultimately how directly the instruction set maps to the underlying implementation. For example, VLIW machines follow the same principle but with the focus on superscalar execution: they favour explicit parallelism encoded in the instruction stream, as opposed to dynamic circuitry implementing instruction reordering and dependency tracking.
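
A toy sketch of that contrast (a hypothetical 3-slot bundle, not any real VLIW encoding): the compiler, not the hardware, decides what issues together.

    #include <stdint.h>

    /* Hypothetical VLIW bundle: three operations the compiler has
       already proven independent, issued in the same cycle. The
       hardware does no reordering or dependency tracking. */
    struct bundle {
        uint32_t alu_op;   /* integer slot        */
        uint32_t mem_op;   /* load/store slot     */
        uint32_t fp_op;    /* floating-point slot */
    };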



