
I don't mean to wear out this thread, and I totally respect your different viewpoint, but when I see:

  ... they're trying to tell us something. When we need greater
  computer power, the answer is not "get a bigger computer", it's
  "get another computer".

that does read as dogmatic advice to me, taken in isolation. It boils down to "the answer is X." Not "consider these factors" or "weigh these different options," but just "this is the answer, full stop."

That's dogma, no?

(that aside, I do slightly regret the snarkiness of my initial comment :)



It is extremely rare that your compute workload has scaling properties that call for just a slightly faster computer. The vast majority of the time, if you are bound by hardware at all, the answer is to scale horizontally.

The only exception is really where you have a bounded task that will never grow in compute time.
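
To put rough numbers on that intuition, Amdahl's law is the usual back-of-the-envelope tool for the scale-up/scale-out tradeoff. A minimal Python sketch (the 95% parallel fraction is just an illustrative assumption, not data from any real workload):

  # Amdahl's law: overall speedup from n workers when a fraction p
  # of the work parallelizes perfectly. p = 0.95 is an assumption.
  def speedup(n, p=0.95):
      return 1.0 / ((1.0 - p) + p / n)

  for n in (1, 2, 4, 8, 16, 64):
      print(f"{n:>2} workers: {speedup(n):.2f}x")
  # Even at 64 workers, the serial 5% caps you near 15x -- the
  # interesting question is how parallel your workload is, not
  # how fast one core is.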


Perhaps I misunderstand you, but what about those decades where CPUs were made faster and faster, from a few MHz up to several GHz, before hitting physical manufacturing and power/heat limits?

Was that all just a bunch of wasted effort, and what they should have been doing was building more and more 50 MHz chips?

Of course not. There are lots of advantages to scaling up rather than out.

Even today, there are clear advantages to using an "xlarge" instance on AWS rather than a whole bunch of "nano" ones working together.

But all this seems so straightforward that I suspect I really don't understand your point...


>Perhaps I misunderstand you, but what about those decades where CPUs were made faster and faster, from a few MHz up to several GHz, before hitting physical manufacturing and power/heat limits?

If you waited for chips to catch up to your workload, you got smoked by any competitors who parallelized. Waiting even a year to double speed when you could just use two computers was still an eternity.

> Was that all just a bunch of wasted effort, and what they should have been doing was build more and more 50MHz chips?

No, that’s a stupid question and you know it. You set it up as a strawman to attack.

Hardware improvements are amazing and have let us do tons for much cheaper.

However, the ~4 GHz CPUs we have now are not meaningfully faster in single-thread performance than what you could buy literally a decade ago. If you’re sitting around waiting for 32 GHz that should only be “3 years away”, you’re dead in the water. All modern improvements are power savings and density of parallel cores, which require you to face what Grace presented all those years ago.

Faster CPUs aren’t coming.

xlarge on AWS is a ton of parallel cores. Not faster.
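
And "use the cores" is mundane in practice. A minimal sketch, assuming a CPU-bound job (busy_work is a made-up stand-in workload, not anyone's real code):

  # Split a CPU-bound job across all cores instead of waiting
  # for a faster core. `busy_work` is a hypothetical workload.
  from multiprocessing import Pool

  def busy_work(n):
      return sum(i * i for i in range(n))

  if __name__ == "__main__":
      jobs = [5_000_000] * 8
      with Pool() as pool:       # defaults to one worker per core
          results = pool.map(busy_work, jobs)
      print(sum(results))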


I just want to make one last attempt to get my point across, since I think you are discussing in good faith, even if I don't like your aggressive timbre.

There is risk in reinforcing a narrow-minded approach that "all we need is more oxen." It limits one's imagination. That's the essence of what I've been advocating against in this thread, though perhaps my attempts and examples have merely chummed your waters. Ironically, I'd say Grace Hopper rather agrees, elsewhere in the linked talk[1].

> Faster CPUs aren’t coming.

Not with that attitude, ya dingus (:

[1] "https://www.youtube.com/watch?v=si9iqF5uTFk&t=1420s

  I think the saddest phrase I ever hear in a computer installation
  is that horrible one "but we've always done it that way." That's a
  forbidden phrase in my office.


Why not both?

I liked Grace Hopper's comments as a rebuttal against "only vertical! No horizontal!", but I'd agree that reading that rebuttal dogmatically would be just as bad a decision.

Bigger is better in terms of height and girth when it comes to capabilities. At any given time, figure out the most cost-efficient number of oxen of varying breeds for your workload and redundancy needs and have at it. In another year, if you're still travelling the Oregon Trail, you can redo the math and trade in the last batch's oxen for some new ones; repeat ad infinitum, or for as long as you're in business.
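
That "do the math" step can be quite literal. A toy sketch (the prices and throughput numbers are placeholders I made up, not real AWS rates):

  # Toy cost comparison: cheapest count of each breed of ox that
  # meets a throughput target. Numbers are illustrative placeholders.
  instances = {"nano": (0.005, 1), "xlarge": (0.17, 40)}  # $/hr, units/hr
  target = 200  # required work units per hour

  for name, (price, rate) in instances.items():
      count = -(-target // rate)  # ceiling division
      print(f"{count:>3} x {name}: ${count * price:.2f}/hr")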


You clearly aren’t working within the constraints of real-world computing. The clock-speed ceiling has been in place for nearly 20 years now. You haven’t posted anything suggesting alternatives are possible.

Your point has been made and I’m telling you very explicitly that it’s bad. The years of waiting for faster processors have been gone for basically a generation of humans. When you hit the limit of a core, you don’t wait a year for a faster core; you parallelize. The entire GPU boom exemplifies this.


I agree. And it's interesting that the ceiling for the faster computer still comes back to her visualization of a nanosecond. Keep cutting that wire shorter and shorter, and there's almost nothing left to cut. If we want it to go faster, we have to keep halving the wire.
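
Her nanosecond was the famous 11.8-inch wire: the farthest light can travel in a billionth of a second. The same arithmetic shows why clock speed hit a wall; a quick sketch:

  # How far a signal can possibly travel in one clock cycle.
  C = 299_792_458  # speed of light in m/s
  for ghz in (0.05, 1, 4, 32):
      cm_per_cycle = C / (ghz * 1e9) * 100
      print(f"{ghz:>5} GHz: {cm_per_cycle:.1f} cm per cycle")
  # At 4 GHz light covers ~7.5 cm per tick; at a hypothetical
  # 32 GHz, under 1 cm -- "almost nothing left to cut".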

Despite the very plain language, her talk has a lot of depth to it, and I find it striking how on the money she was all the way back then.


I think the misunderstanding here (and I apologize where I've contributed to it) is that you think I'm talking specifically and only about CPU clock rates.

The scale-up/scale-out tradeoff applies to many things, both in computing and elsewhere. I was trying to make a larger point.

I guess it's appropriate, in this discussion about logging, that we got into some mixup between the forest and the trees (:



