
tl;dr Mini rant (pro dedicated, anti-cloud)

Amazon has done an amazing job of convincing people that their hosting choice is between cloud (aka, AWS) or the higher-risk, knowledge intensive, self-hosting (aka, colocation). You see this play out all the time in HN comments. CTOs make expensive and expansive decisions believing these are the only two options. AWS has been so good at this, that for CEOs and some younger devops and developers, it isn't even a binary choice anymore, there's only cloud.

Do yourself, your career, and your employer a favor, and at least be aware of a few things.

First, there are various types of hosting, each with their own risks and costs, strengths and weaknesses. The option that cloud vendors don't want you to know about is dedicated servers (which Hetzner is a major provider of). Like cloud vendors, dedicated server vendors are responsible for the hardware and the network. (If you go deeper than, say, EC2, then I'll admit cloud vendors do take on more of the responsibility (e.g. failing over your database).)

Second, there isn't nearly enough public information to tell for sure, but cloud plays a relatively minor role in world-wide server hosting. Relative to other players, AWS _is_ big (biggest? not sure). But relative to the entire industry? Low single-digit %, if that. The industry is fragmented, there are thousands of players, offering different solutions at different scales.

For general purpose computing/servers, cloud has two serious drawbacks: price and performance. When people mention that cloud has a lower TCO, they're almost always comparing it to colocation and ignoring (or aren't aware of) the other options.

Performance is tricky because it overlaps with scalability. But the raw performance of an indivisible task matters a lot. If you can do something in 1ms on option A and 100ms on option B, but B can scale better (but possibly not linearly), your default should not be option B (especially if option A is also cheaper).
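To make the tradeoff concrete, here is a back-of-the-envelope sketch with made-up numbers (the helper function and its parameters are illustrative, not a benchmark): even under perfectly linear scaling, the slower option needs far more nodes to match throughput, and its per-task latency never improves.

```python
import math

# Illustrative numbers only:
# Option A: 1 ms per task (fast dedicated box).
# Option B: 100 ms per task, but scales horizontally.
LATENCY_A_MS = 1.0
LATENCY_B_MS = 100.0

def nodes_needed_for_throughput(target_tasks_per_sec, latency_ms, workers_per_node=8):
    """Nodes required to sustain a target throughput, assuming each worker
    handles one task at a time, no queuing, and perfectly linear scaling."""
    tasks_per_node_per_sec = workers_per_node * 1000.0 / latency_ms
    return math.ceil(target_tasks_per_sec / tasks_per_node_per_sec)

# To serve 10,000 tasks/sec:
print(nodes_needed_for_throughput(10_000, LATENCY_A_MS))  # 2 nodes
print(nodes_needed_for_throughput(10_000, LATENCY_B_MS))  # 125 nodes
```

And that 125-node estimate is the best case: real horizontal scaling is sublinear, while every individual task still takes 100 ms no matter how many nodes you add.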

The only place I've seen cloud servers be a clear win is GPUs.



Based on my two decades of experience in the field, ranging from being part of a team that was building supercomputers to one that was moving amazon.com to AWS, there are many dimensions you need to consider for computational workloads.

The primary deciding factor is always security. You simply cannot use just any small vendor because of the physical security (or the lack thereof). Unless, of course, you do not care about security. If a red team can just waltz into your DC and connect directly to your infra, it is game over for some businesses. You can easily do this with most vendors.

The secondary deciding factor is networking. Most traditional co-los have a very limited understanding of networking. A CCIE or two can make a real difference. Unfortunately, those guys usually work at bigger companies.

The third deciding factor is air conditioning and electricity. Worst case, you are facing an OVH situation. https://www.datacenterdynamics.com/en/opinions/ovhclouds-dat....

(It is really funny, because I had warned them that their AC/cooling solution was not sufficient, and they explained to me that I was wrong. I was not aware of the rest: wooden elements, electricity fuckups, etc.)

"""During the year, an article in VO News by Clever Technologies claimed there were flaws in the power design of the site, for instance that the neighboring SBG4 facility was not independent, drawing power from the same circuit as SBG2. It's clear that the site had multiple generations, and among its work after the fire, OVHcloud reported digging a new power connection between the facilities."""

The fourth would probably be pricing. TCO is one consideration, after you have made sure that the minimum requirements are met, but only after.

So, based on the needs, one can choose wisely, guided by the ___business requirements___. For example, running an airline vs. running complex simulations entails very different requirements.


AWS and GCP (and I assume Azure, but I haven't looked) have definitely set the standard with respect to checking off all the boxes when it comes to helping customers with security audits and requirements. This is an area where other providers have seriously lagged. I've been involved in cases where using the cloud is essentially a pass, and not using the cloud raises red flags.

From a sales point of view, I agree with you that, for a lot of folks, this might be the main concern. If you're doing B2B or government work this might be, by far, the most important thing to you.

However, this is at least partially pure sales and security theatre. It's about checkboxes and being able to say "we use AWS" and having everyone else just nod their head and say "they use AWS."

I'm not a security expert (though I have held security-related/focused programming roles), but as strong as AWS is with respect to paper security, in practice, the foundation of cloud (i.e. sharing resources) seems like a dealbreaker to me (especially in a rowhammer/spectre world). Not to mention the access AWS/Amazon themselves have, and the complexity of cloud-hosted systems (and how easy it is to misconfigure them (1)). About 8 years ago, when I worked at a large international bank, that was certainly how cloud was seen. I'm not sure if that's changed. Of course, they owned their own (small) DCs.

(1) - https://news.ycombinator.com/item?id=26154038 The tool was removed from GitHub (conspiracy theory!), but I still find the discussion there relevant.


> The primary deciding factor is always security

so, anywhere where your workloads or data are physically co-located on the same hardware as someone else's should be automatically disqualified, right?


No. It is only a risk if an attacker can somehow exploit it. Can you show me a scenario where an attacker could learn which physical server on AWS my Lambda function is running on, break out of their container, and get access to mine? This risk is acceptable for most workloads. Maybe not for the three-letter gov agencies, which is why they got GovCloud. You see, again: what is the use case? What is the risk? What risk is acceptable?


> Do [...] your career [...] a favor, and at least be aware of a few things. [emphasis mine]

Doing your career a favor is how we ended up in this situation in the first place. The tech industry had so much free money floating around that there was never any market pressure to operate profitably, so complexity increased to fill the available resources.

This has now gone on long enough that there are now entire careers built around the idea that the cloud is the only way - people that spend all day rewriting YAML/Terraform files, or developers turning every single little feature into a complex, failure-prone distributed system because the laptop-grade CPU their code runs on can't do it synchronously in a reasonable amount of time.

All these people, their managers and decision makers could end up out of a job or face inconvenient consequences if the industry were to call out the bullshit collectively, so it's in everyone's best interest to not call it out. I’m sure there are cloud DevOps people that feel the same way but wouldn’t admit it because it’s more lucrative for them to keep pretending.

This works at multiple levels too, as a startup, you wouldn't be considered "cool" and deserving of VC funding (the aforementioned "free money") if you don't build an engineering playground based on laptop-grade CPU performance rented by the minute at 10x+ markup. You wouldn't be considered a "cool" place to work for either if prospective "engineers" or DevOps people can't use this opportunity to put "cloud" on their CVs and brag about solving self-inflicted problems.

Clueless, non-tech companies are affected too - they got suckered into the whole "cloud" idea, and admitting their mistake would be politically inconvenient (and potentially require firing/retraining/losing some employees), so they'd rather continue and pour more money into the dumpster fire.

A reckoning on the cloud and a return to rationality would actually work out well for everyone, including those who have a reason to use it, as it would force them to lower their prices to compete. But as long as everyone is happy to pay their insane markups, why would they not take the money?


SVB offered a perk of free AWS and Google Cloud credits to new startup clients, so even more "free money" to entrench the startup ecosystem in the cloud.

https://www.svb.com/account/startup-banking-offers


I think the performance argument is huge as there are a lot of factors at play here.

For one, people generally underestimate the performance cost of their choices. And that reaches from app code, to their db and their infrastructure.

We’re talking orders of magnitude of compounding effects. Big constant factors that can dominate the calculation. Big multipliers on top.

Horizontal scaling with all its dollar cost, limitations, complexity, maintenance cost and gotchas becomes a fix on top of something that shouldn’t be a problem in the first place.
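The compounding effect is easy to see with arithmetic. The multipliers below are made up for illustration (not measurements of any particular stack), but each one is individually plausible, and multiplying them is how real slowdowns accumulate:

```python
# Hypothetical (illustrative) slowdown factors, each layered on top of the last.
slowdowns = {
    "chatty ORM instead of one SQL query": 10,
    "interpreted hot loop instead of compiled code": 20,
    "network-attached storage vs local NVMe": 5,
}

total = 1
for cause, factor in slowdowns.items():
    total *= factor
    print(f"{factor:>3}x  {cause}")

print(f"combined: {total}x")  # 1000x
```

Three "merely" 5-20x factors combine into a 1000x slowdown: a job that could finish in one second now takes about 17 minutes, and then horizontal scaling gets sold as the cure.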



