As far as I've heard, we haven't hit that limit in the consumer space yet; we're only just starting to approach it in other spaces. Is this not true?
We have hit it in the consumer space. A powerful processor from three years ago is still a decent processor today; that was not true in, say, 2000. This holds for consumer machines and high-performance computers alike.
For a long time, processor architects could increase the frequency, the cache size, and the pipeline depth (allowing more instructions in flight at once) to yield more powerful processors. As Moore predicted, architects kept getting more transistors to play with, and they used them to make processors more powerful with designs that were "like the old one, just more so." But they've hit fundamental physical limits: raising the clock speed and the pipeline depth at the same time means you have to move the same amount of data over a longer distance across the silicon in a shorter amount of time. We've reached the point where that's no longer feasible.
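A back-of-the-envelope sketch of that distance-versus-time argument (the specific numbers here are my assumptions, not from the answer above): even if on-chip signals moved at a generous fraction of the speed of light, one clock cycle at a few gigahertz only allows a signal to cross a distance on the order of a large die, before you even account for gate delays.

```python
# Back-of-the-envelope check of the "same data, less time" argument.
# Assumed numbers (illustrative, not from the original answer):
# a 4 GHz clock and signals propagating at half the speed of light.
C = 3.0e8            # speed of light in vacuum, m/s
CLOCK_HZ = 4.0e9     # assumed clock frequency: 4 GHz
PROPAGATION = 0.5    # assumed on-chip signal speed as a fraction of c

def distance_per_cycle_mm(clock_hz=CLOCK_HZ, fraction=PROPAGATION):
    """Farthest a signal could travel in one clock cycle, in millimeters."""
    cycle_seconds = 1.0 / clock_hz
    return fraction * C * cycle_seconds * 1000.0

print(distance_per_cycle_mm())  # ~37.5 mm: roughly the scale of a large die
```

Push the clock higher and that budget shrinks proportionally, which is why frequency scaling ran out of road.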
Hence, multicore. Processor architects are still getting more and more transistors to play with, so instead of spending them on making a single core more powerful (as they did for a long time), they're spending them on multiple cores. But you no longer get the "for free" performance boosts that came with rising single-core performance. Now we have to change how we write software to take advantage of the new hardware, and some software can't take advantage of it at all.
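To make the software side concrete, here's a minimal sketch (my illustration, not from the answer above) of the same CPU-bound work written serially and then split across cores using Python's standard-library `concurrent.futures.ProcessPoolExecutor`. The serial version sees no benefit from extra cores; the parallel version only helps because the work divides cleanly into independent chunks.

```python
# Sketch: the same CPU-bound computation, serial vs. spread across cores.
# Function and variable names here are illustrative, not from the answer.
from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(bounds):
    """CPU-bound work over a half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def serial(n):
    # One core does everything; extra cores sit idle.
    return sum_of_squares((0, n))

def parallel(n, workers=4):
    # Split [0, n) into one chunk per worker; each chunk runs in its
    # own process, so the chunks can execute on separate cores.
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))

if __name__ == "__main__":
    n = 1_000_000
    assert serial(n) == parallel(n)
```

The restructuring is the point: the programmer had to find the independent chunks. For an inherently sequential computation, where each step depends on the previous one, there are no chunks to hand out, which is why some software can't benefit from more cores.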