
One of my major beefs with IT conversations is the stubborn adherence to the manufacturing notion of 'economies of scale'.

That's not how algorithms work. Most of the things we do are n log n complexity. There are lots of breakpoints where 10x as many users cost you 11x as much in hardware, and we pretend it's going to cost us 8x. Any time your estimates are off by 40%, that should trigger your root cause analysis protocol.
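A quick sketch of that arithmetic (the specific user counts are illustrative, not from the comment): for an n log n workload, growing users by 10x multiplies cost by more than 10x, the opposite of economies of scale.

```python
import math

def cost_ratio(n, k):
    """Cost multiplier for an n log n workload when users grow by factor k:
    (k*n * log(k*n)) / (n * log(n))."""
    return (k * n * math.log(k * n)) / (n * math.log(n))

# At 1e6 users, 10x growth costs ~11.7x -- roughly the "10x users, 11x
# hardware" breakpoint described above, nowhere near a sub-linear 8x.
print(round(cost_ratio(10**6, 10), 2))
print(round(cost_ratio(10**9, 10), 2))  # still >10x even at huge n
```

The gap shrinks as n grows (the log ratio approaches 1) but never flips below 10x, which is why the "8x" assumption stays wrong at every scale.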

In particular we are lousy at statistics. We have to be right in the middle of the consequences of misreading them before we act. When you have 10 servers you might have to deal with a deployment race condition once a month. When you have 30 servers you might have to deal with one every week. That starts to materially affect your estimates when you have five of those sorts of things going on.
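One toy model of why incident rates outrun server counts (an assumption for illustration, not the comment's math): if a deployment race needs two servers to overlap, the rate scales with server *pairs*, not servers. The `baseline_rate` of one incident per month at 10 servers is taken from the comment; the pairwise scaling is hypothetical.

```python
from math import comb

def incidents_per_month(servers, baseline_servers=10, baseline_rate=1.0):
    """Incident rate under a pairwise-interaction assumption:
    rate proportional to C(servers, 2), anchored at baseline_rate
    incidents/month for baseline_servers."""
    return baseline_rate * comb(servers, 2) / comb(baseline_servers, 2)

# 3x the servers gives ~9.7x the incidents under this model --
# it overshoots the comment's "once a week," but the direction is the
# point: the rate grows much faster than the server count.
print(round(incidents_per_month(30), 2))
```

Multiply that by the five failure modes the comment mentions and the aggregate incident load starts dominating the schedule, which is exactly where naive linear estimates fall apart.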


