
It is the nature of progress: doing more with less. That's increased productivity.

Of course software now uses more computing resources, so that’s not doing more with less. But the computer is cheap. What’s expensive is the humans who program the computer. Their time is expensive, and getting experienced, expert humans is even more expensive.

So we now have websites with rich features bolted together from frameworks. Same for desktop software, embedded systems, and everything else. They're optimized for developer time and features, not for load time, because load time isn't expensive, at least not in comparison.

As a user, the only solution I see to this is to use old-fashioned dumb products rather than cheaply developed "smart" ones. For instance, I'm not going near a smart light switch or a smart lawn sprinkler controller. Old dumb ones are cheap and easy and fast and predictable.



>But the computer is cheap. What’s expensive is the humans who program the computer.

This is a nice half-truth we tell ourselves, but that's not the full story.

There exist plenty of optimizations where the programmer time would cost less than the additional hardware. And those losses compound. But they're a little too hard to track, and cause is a little too far divorced from effect.

I did our first-ever perf pass on an embedded application as we started getting resource-constrained. I knocked 25% off the top in a week. Even if I had spent man-months for a 10% savings, try and tell me that's more expensive than spinning new boards.

That's not to say we're opposed to hardware changes; we do them all the time. But that cost curve is front-loaded, so it's attractive to spend a non-zero amount of developer time right now investigating whether the looming spend is avoidable. That's not the case when you're trying to control the acceleration of an AWS bill that spreads your spend out month to month through eternity.

Who wants to spend a big chunk of money up front to find out whether you can change that spend-rate trend by a tiny percentage? And even if you do, and get a perf gain, what happens when someone else on the team ships a perf loss? Then the gain doesn't feel real, and all you can see is what you spent. Even if you have good data about the effect of both changes (which you don't), the fact that the gain was offset diminishes the sense of effect.
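The up-front-cost-vs-growing-bill trade can be put in back-of-envelope terms. A minimal sketch, with entirely made-up numbers (the bill size, growth rate, perf gain, and engineering cost are all hypothetical):

```python
# Back-of-envelope sketch: when does a one-time perf investment pay for
# itself against a growing monthly cloud bill? All numbers are made up.

def breakeven_month(monthly_bill, growth, perf_gain, upfront_cost, horizon=120):
    """Return the first month in which cumulative savings exceed the
    up-front engineering cost, or None if that never happens in `horizon`."""
    saved = 0.0
    bill = monthly_bill
    for month in range(1, horizon + 1):
        saved += bill * perf_gain   # the slice of this month's bill we no longer pay
        if saved >= upfront_cost:
            return month
        bill *= 1 + growth          # the bill keeps accelerating regardless
    return None

# A $20k/month bill growing 3%/month; $10k of perf work buys a 5% gain.
print(breakeven_month(20_000, 0.03, 0.05, 10_000))  # → 9 (pays off in month 9)
```

The point of the sketch: because the savings scale with the bill, a small percentage gain against a compounding spend pays back faster the longer you wait to measure it, yet the cost shows up entirely in month zero.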

And rather than investigate perf, people can always lie to themselves that the cost is all about needing to "scale." That way they convince themselves not only was there nothing they could have done, the cost is a sign that their company is hot shit.

If you don't think that kind of perf difference is real, look at Maciej's comparison of pinboard and an anonymized del.icio.us: https://idlewords.com/talks/website_obesity.htm (ctrl-f "ACME")

And if perf has any impact on sales, cause and effect are even further apart. You might be able to measure the effect perf has on your sales website directly, but if the feedback loop involves a user forming an opinion over days or weeks? Forget about knowing. Oh, sure, they'll complain, but there are no metrics, so we get the rationalizations we see in this thread.



