Hetzner launches three new dedicated servers (hetzner.com)
380 points by mfiguiere on March 15, 2023 | 312 comments


I've been using Hetzner servers for ~15 years with multiple clients and employers, and I've always been disappointed with other providers compared to what Hetzner delivers. OVH with their frequent network-level outages, the 2021 fire, and so on. DigitalOcean with their far too frequent and long-lasting maintenance windows. And AWS/GCP/Azure with their obscene pricing, ridiculous SLAs, and occasional hours-long outages. One application platform I managed was migrated from DO to Hetzner with huge cost savings, much better uptime, and far higher performance, running on bare metal servers rather than cheap VMs. If you need more than two vCPUs and a few gigs of RAM, I see absolutely no reason to use overpriced AWS/GCP/Azure VMs.


While I like Hetzner a lot and share your recommendation, I just don't see how it compares to full-blown cloud providers like AWS, GCP or Azure. It's a common misconception to put them on the same level when the offerings are completely different.

Nobody seriously uses AWS/GCP/Azure to have a couple VMs or dedicated servers alone. If someone can run their full workload in e.g. Hetzner without much hassle, then they shouldn't be using any of the other cloud platforms in the first place, as they'd definitely be overpaying.

EDIT: I want to clarify that I unfortunately do know some companies that use the big 3 as simple VPS providers, but everybody here seems to agree that it's a waste of money. That's one of my main points, and it's also why comparing the big ones to Hetzner or any other standalone VPS/dedicated server provider is pointless: they serve different use cases.


> Nobody seriously uses AWS/GCP/Azure to have a couple VMs or dedicated servers alone.

I think you're seriously underestimating the number of cloud customers that do a simple lift and shift.


I've worked for at least 3 companies using cloud services that could have hosted what they were doing on a handful of boxes for a fraction of the cost.

(The most egregious was a system peaking at maybe 5 hits a second during the month-end busy period living in multiple pods on a GCP Kubernetes cluster.)


Hosted Kubernetes has to be one of the highest-margin products of the big 3. So many clusters spun up doing little to nothing. And the cost... In the org I'm in, people spin up one- and two-worker-node clusters all the time. I appreciate the control plane / worker node model, but it's overkill in so many situations.


Until infrastructure fails, for an enormous number of possible reasons. I've seen it happen over and over.


Just switching from Ruby to Crystal - basically the same syntax - will save you at least 3-4x the money, if not 10x in some cases. And that's not even counting a good Nginx/OpenResty load-balancing setup and utilizing Varnish, Redis, etc.


> I think you're seriously underestimating the amount of cloud customers that do a simple lift and shift.

I've done exactly that at a previous startup. Granted, it was 10 years ago, but going from racked infra to AWS ended up being half the cost for what was effectively twice the infra (we built out full geo-redundancy at the same time).


Indeed, and it's not just fools doing "lift and shift". I think a lot of shops do simple "lift and shift" to minimize vendor lock-in.


> Nobody seriously uses AWS/GCP/Azure to have a couple VMs or dedicated servers alone.

Most of my clients do just that - just EC2 on AWS. Of course, my experience may not represent the average case, but it is certainly not "nobody". I believe most do it because AWS/Azure is the "safe option".

Choosing AWS/Azure is the modern version of "Nobody ever gets fired for buying IBM".

--

I just recently tried Hetzner myself and I love the experience so far. I am aware that I am comparing apples and oranges here, but Hetzner's UI is just so fast and simple compared to AWS, and the pricing is great. Even their invoices are clean and understandable.


> Most of my clients do just that - just EC2 on AWS. Of course, my experience may not represent the average case, but it is certainly not "nobody". I believe most do it because AWS/Azure is the "safe option".

If they're going to do that... why not at least choose Lightsail?


Lightsail lacks any VPC or Security Group controls - these are typically the first things I miss when using plain servers at a VPS provider.


Lots of companies do this even some big ones. Mitigating vendor lock-in is a big reason. Using what’s effectively simple VMs makes it much easier to pick up and go elsewhere.

Not all businesses decide that’s a risk worth mitigating, but some do.


I wonder what percentage of AWS are just EC2 instances that run as a simple VM. I know I’ve never used more than that.


This is how my employer, a large enterprise, uses 'cloud'. They just picked up all the server boxes and virtualized them in AWS. Obviously it costs a lot more now, and there are no benefits like flexibility because the configuration is all static.

I know cloud can make sense but not like this.


It's not just companies using the big 3 as simple VPS providers. A lot of applications are also hugely over-engineered for their actual needs, and unnecessarily tie themselves to proprietary cloud APIs just for the sake of IaC, or for the simplicity of having the whole infrastructure at one provider. Or for the sake of using Kubernetes, which I'd guess is actually appropriate for maybe 1 in 10 of its use cases. Part of the problem is that using Big Cloud Provider X is the default in a lot of companies, and alternatives aren't even considered when starting a new project.


> Nobody seriously uses AWS/GCP/Azure to have a couple VMs or dedicated servers alone.

Hmm, anything that doesn't have insanely huge traffic or requirements does, and in those cases the major cloud vendors are still cheap and easy enough.

Hetzner seems to fit the "not big enough to get major discounts and support but large enough to have considerable cloud bills" customer and that is fine.


Amazon, for example, tries to capture these customers with the Lightsail offering [1], which is a separate product from the typical AWS offering (even though it of course runs on AWS, but that doesn't matter in this context). No need to go with "raw" EC2, which would make things more complicated and more expensive, if all you want is a couple of VPSes.

[1] https://aws.amazon.com/lightsail/


> Nobody seriously uses AWS/GCP/Azure to have a couple VMs or dedicated servers alone.

Many companies and people do host loads that would be better served on dedicated hardware on EC2 because "cloud".

> If someone can run their full workload in e.g. Hetzner without much hassle then they shouldn't be using any of the other cloud platforms in the first place as they'd be definitely overpaying.

The ability to provision, de-provision, clone, load balance and manage without talking to people, waiting for hardware or really even having to understand in detail what is going on (yes this is bad, but still... ) is one of the big reasons cloud is popular. Many dedicated hosts have gotten a lot better in this area.


>"Nobody seriously uses AWS/GCP/Azure to have a couple VMs or dedicated servers alone"

It actually does happen. They build some software, deploy it on a VM, and have said software use a managed cloud database service that removes the headache of maintaining backups, standbys, point-in-time recovery, and securing data at rest.

I have a couple of shell scripts that do all of that on Hetzner, but I can imagine some org with enough money not caring about the price, for the convenience of somebody else taking care of your data.
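These aren't the commenter's actual scripts, but the kind of thing described can be sketched. Assuming PostgreSQL, the backup half would be a `pg_basebackup` invocation with streamed WAL (which is what enables point-in-time recovery); the sketch below only executes the retention half, since that is safe to run anywhere. All paths and the retention count are made up for illustration.

```shell
#!/bin/sh
# Hypothetical sketch. The backup half would be roughly:
#   pg_basebackup --pgdata="$BACKUP_ROOT/base-$(date +%F)" \
#       --wal-method=stream --format=tar --gzip
# (streamed WAL segments enable point-in-time recovery: restore the
# base backup, then replay WAL up to the desired timestamp).
set -eu

# Keep the $2 newest base-* archives in directory $1, delete the rest.
prune_backups() {
    ls -1t "$1"/base-* 2>/dev/null | tail -n +"$(( $2 + 1 ))" | xargs -r rm --
}

# Demo on a throwaway directory with fake "backups".
dir=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$dir/base-$i.tar.gz"; done

prune_backups "$dir" 3
ls "$dir" | wc -l   # 3 files remain
```

Encryption at rest and shipping off the box (e.g. piping through gpg to object storage) would bolt onto the same loop.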


Time to start charging corporations to use your shell scripts I guess

They already pay for the cloud and someone to manage their cloud stuff; I bet they would shell out half that if you offered your scripts.

I think that just shows how nonsensical these cloud providers really are, when you can just write some scripts to handle it.


>"Time to start charging corporations to use your shell scripts I guess"

Believe me, I do ;) I adapt those to the particular products I develop for my clients. However, it's not worth my time to release them in generic form. Suddenly I would have to satisfy a bazillion specific constraints and requirements for generic users.


The first time I saw Hetzner's pricing, I assumed it must be a scam, since it seemed like such an incredible deal and yet I hadn't heard of a single person who ever used it.

Glad that I'm regularly seeing how awesome this company is lately.


I've been using Hetzner for many years. I've had a couple of maintenance windows on bare metal servers over the last several years. Otherwise, the only downtime has been self-inflicted.


Hetzner is not on the level of any of the providers you mentioned. It's dirt cheap because latencies and protections against exploits are non-existent. Sure, it serves you well when you don't have such needs, but the moment you do (i.e. DDoS protection and low/stable ping for game servers), Hetzner is out of the window.

- Someone that used to fry lil hetzner servers for fun


Not my experience at all. Hetzner obviously does offer DDoS protection and responds quickly to those kinds of issues. I've also had Hetzner techs proactively contact me regarding attacks on our infrastructure (none of which actually took any of our servers down, by the way). For specialized needs, you can even have your own hardware installed next to your servers in the same rack for a relatively small premium.


Sir, I'm a former staff member of the infamous webstresser.org; I think I should know what we received millions of dollars for.

I understand that you never received attacks of such 'large' scale, but it takes $5 to take a Hetzner server down (assuming you don't know how to do it yourself).


Details would be enlightening : ) Was it the servers or the applications running on them that were taken down? Running bare metal servers exposed to the public is a fairly obvious footgun; there should be at least one layer of load balancing in front, in addition to the provider's firewalls. I'd argue MOST publicly exposed servers run by amateurs can be taken down for less than $5, regardless of whether they're bare metal at Hetzner or some EC2 instance.


If you don't mind answering, what was the nature of your attacks? Was it bandwidth exhaustion or layer-7 CPU exhaustion?


Presumably you can front it with CloudFlare if you need DDoS protection?


Not if you're running game servers; those require plain UDP (sometimes TCP) proxying and CF only offers that on the enterprise plan afaik.


I'm not sure if this is the enterprise plan, but it is designed for proxying game servers and preventing DDoS, like Steam's newer networking:

https://www.cloudflare.com/products/cloudflare-spectrum/


It’s incredibly expensive, looked into it before.


In addition to the replies before mine: even if you could use CF, it's a joke. It's easily bypassable, and there are tons of methods to do it (e.g. the most common we used was huge botnets with simply emulated browsers sending tons of req/s, and that's just the 'public' one; there are tons of private ones we used to write ourselves that were much more complex, but needless, as the one I mentioned worked just fine).


Interesting. Can you maybe tell us some anecdotes?


If you mean anecdotes related to "used to fry lil hetzner servers for fun":

https://krebsonsecurity.com/2018/04/ddos-for-hire-service-we...

should be enough


I agree with your judgement of OVH and the top-tier cloud providers. I've never used Hetzner, but I've had good experiences with UpCloud, Vultr and Linode/Akamai. These three providers are my de facto go-to every time I need to deploy stuff...


I have been using Kimsufi servers (OVH's cheap end) for more than 10 years and have not experienced any major outages (I can't remember even small ones). I still have one dedicated server there. €14/mo for an i5 750, 16 GB RAM and a 2 TB HDD seems quite good to me.


It's like people forget that Leaseweb and Worldstream exist.


Not to mention their DDoS protection, included by default. The AWS DDoS team costs about $6000, last I checked here on HN. Of course, most corporations choose AWS because of the permission management console.


I run a hosting company that has around 100 large dedicated servers at OVH. OVH's website sucks, but everything else is great-- outages are extremely rare in my experience, and their built-in DDoS protection is excellent. Also, OVH's Canada data center has great ping times from the US, whereas Hetzner's locations are in Europe.


We use Hetzner but I also had good experience with Scaleway (another French cloud provider).


Hetzner is German.


Yeah, with "another" I was referring to OVH the parent mentioned.


Have you tried Contabo? That has been my go-to hoster for the past decade.


Not familiar with them, but their pricing seems significantly higher than Hetzner.


Interesting. I initially went with Contabo because they were (are?) much cheaper than their competitors. At least when it comes to VPSes.


The biggest issue I've had with providers like DigitalOcean is the networking speed. 1 Gbps is just not enough, especially when you need to restore a backup or similar.


Just wish they had more of a US presence. Latency is a killer for me.


They do have two US locations, but only for «cloud» products, no bare metal servers.


- AX52: AMD Ryzen 7 7700 / 64 GB / 2x1 TB NVMe - From 59€ [1]

- EX44: Intel Core i5-13500 / 64 GB / 2x512 GB NVMe - From 44€ [2]

- EX101: Intel Core i9-13900 / 64 GB / 2x1.92 TB NVMe - From 84€ [3]

[1] https://www.hetzner.com/dedicated-rootserver/ax52

[2] https://www.hetzner.com/dedicated-rootserver/ex44

[3] https://www.hetzner.com/dedicated-rootserver/ex101


Big Hetzner fan, but the EX101 does not feel like a good value compared to the AX101 that they've had for a while. Yes, the i9-13900 is faster than the 5950X, but does that justify half the RAM and half the disk?

- EX101: Intel Core i9-13900 / 64 GB / 2x1.92 TB NVMe - From €84

- AX101: AMD Ryzen™ 9 5950X / 128GB / 2x3.84 TB NVMe - From €101


Having half the RAM is caused by the fact that both Raptor Lake and Zen 4 are limited to 64 GB of DDR5-4800, unlike the older CPUs (e.g. 5950X) that used DDR4-3200 memory.

Increasing the memory to 128 GB, i.e. to two DIMMs per channel, drops the memory speed, more severely for AMD (DDR5-3600) than for Intel (DDR5-4400).

Overclocking the memory, like in gaming computers, would be unacceptable in server computers.


Using XMP to deliver the rated memory speeds is considered overclocking by some, since at least in the DDR4 days the spec said DDR4-2133 only.


I rent one AX101 and it has been extremely good value. The thing is so cheap and fast.

However, my first one often rebooted randomly and the support wasn't very helpful. They told me to just rent another one, which I did. The second one has rebooted randomly once in about a year. I guess the first one went to auction and still happily reboots.

Hetzner feels like a hard discount cloud provider. I still prefer them over AWS or Azure for non critical workloads that have a little budget.


I'm currently renting around 40 x AX-line machines from them. The random reboots are a real thing, but not painfully so. I would say I see around 5-10 a year on average across all my boxes, so (very roughly) that's one random reboot per 50-100 machine-months.

I asked them about one of the incidents, and they said that the breaker serving the rack had popped. I would guess that is a fairly common cause of this problem.

Another issue is disk failures. They replace the disk incredibly quickly (<1hr) but unless you are willing to pay for a brand new disk they fit whatever they have in stock. Sometimes that seems to be a unit which is itself close to death, and in another couple of months, guess what happens. Mostly they give you something reasonable so it all works out in the end.

Hetzner are a discount cloud provider. For the money, I'm basically delighted with them. The only other realistic option at a similar price point would be to self-host... and I'm not at all convinced that would be worth the hassle.


About 10 years ago, when they introduced the EX40 (I think?), those hard-froze randomly every couple of hours to days on Linux. But only for some users. They couldn't track down the issue for the first few weeks; I guess that's what you get for being an early adopter. They must have gotten (un)lucky during testing and only had setups that worked.

It was first suspected to be certain brands of RAM, so I requested a RAM swap, which unfortunately didn't help. Then a BIOS update, which also didn't help. Then someone figured out that nohz=off on the KCL fixed the problem, and I had it running like this successfully for a few years. Long after at least one dist-upgrade, I remembered that and removed the option again, and the server still ran stable.

There's no real moral to this story, I guess, but at least the support is super responsive, and as the root cause wasn't clear at that point, they didn't hesitate to swap random stuff if you requested it. I also had a faulty HDD last Sunday in one server and requested a swap, which they did within 20 minutes of me opening the ticket.
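For anyone wondering how "nohz=off on the KCL" is applied in practice: on a GRUB-based distro you add the option to the kernel command line and regenerate the config. A minimal sketch, run on a scratch copy so it's safe to execute (the real file is /etc/default/grub, and the existing `GRUB_CMDLINE_LINUX_DEFAULT` contents are an assumption):

```shell
#!/bin/sh
# Sketch: add nohz=off to the kernel command line via GRUB. Done on a
# scratch copy here; on a real box you'd edit /etc/default/grub, then
# run `update-grub` (Debian/Ubuntu) and reboot, and confirm the option
# took effect with `cat /proc/cmdline`.
set -eu

conf=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > "$conf"

# Append nohz=off inside the quoted parameter list.
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 nohz=off"/' "$conf"

cat "$conf"   # GRUB_CMDLINE_LINUX_DEFAULT="quiet nohz=off"
```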


Because Hetzner is so cheap, if I end up with a faulty server I just order a new one. That rarely happens, though, and mostly with the newer products. For me, 98% of the servers have been very stable.

I guess it would be a good habit to report the faulty server to Hetzner, though.


I was one of those users. The issue turned out to be CPU bugs. Turning off C-states in the BIOS resolved those random hard freezes.


> Hetzner feels like a hard discount cloud provider.

Not to discount your experience, but honestly, my experiences with Hetzner support have generally been unexpectedly good. They've been very quick to respond, tend to immediately start on whatever my issue is if I provide enough info in the initial ticket, etc. And unlike OVH, I haven't felt like I needed to call them on the phone to get decent service. Kind of surprising to hear that their solution was just "rent another".

Overall pleasant experience for me, especially given how cheap the servers are. My only real wishes would be dedicated servers in the US or Canada, and possibly something in between their unmetered 1Gbps and metered 10Gbps offerings -- being able to burst a bit higher than gigabit occasionally without paying the €1/TB bandwidth fee would be beautiful.


> being able to burst a bit higher than gigabit occasionally without paying the €1/TB bandwidth fee would be beautiful.

IIRC you get 30 TB/month included - so it's not "pay nothing vs pay from the first TB" - but I could be wrong; I've not yet had any projects where 10 Gbps made sense.


> Hetzner feels like a hard discount cloud provider. I still prefer them over AWS or Azure for non critical workloads that have a little budget.

They are a discount provider. In my experience, however, these kinds of problems are very rare. They pop up now and then; I would just order a new server. In one company I was involved with, Hetzner was used from the start and the architecture was built around it, and at some point we calculated the costs compared to using AWS or similar. The cost savings were insane.

Hetzner is more hassle, but the question is how much you are willing to pay to get the hassle removed, and in which way.


I got the same issue as well. The server rebooted randomly and I'm still not sure what to look at.

Everything is normal - temperature, CPU load, etc. - and the load is mostly idle. The server is indeed an auction one, though.

The other, non-auction one has been rock solid.


Huh, weird. We run 15 AX101 instances, 13 for Elasticsearch and 3 for ScyllaDB, and I haven't had one reboot yet. Though even if they did, it would not impact the stability of our databases.


Both are an insane amount of compute for the price.


Just for comparison, for that kind of money per month AWS will rent you about 2 cores / 4 threads of AMD with 16 GB of RAM.


That's just the compute though, not including bandwidth or disks. These come with effectively the equivalent of ~$3k worth of AWS data transfer and vastly more SSD storage.


> just the compute though, not including bandwidth or disks

But including a platform for access configuration, monitoring, deployment, automatic replacement, and many other things. AWS as a whole is not really comparable to getting servers from hetzner. (Unless that's all you want from them, but then you're overpaying for lots of stuff you don't use)


Had the AX101 for a long time, awesome box! I ended up not using it enough to justify the cost though.


I really wish I could get dedicated hardware for somewhere close to that price here in Asia.

It seems like Hetzner is the only company in the world offering these kinds of prices, right? What's the catch?


The biggest catch is that you're getting more desktop-grade hardware than server-grade (notice Intel's i series instead of Xeon, non-ECC RAM). Doesn't make a lot of difference for the vast majority of use cases, but something to keep in mind.

You can get server-grade hardware from them, but then the pricing difference isn't so significant when compared to other providers.


If you go with their AMD servers, you'll get ECC RAM, which would be the main sticking point for me. ECC memory is a €5.50 add-on to their €37 Ryzen 5 3600 (6-core, 12-thread) 64 GB RAM server. Even at €43 with ECC RAM, it's still a great deal. It even comes with 2x512 GB NVMe storage.

Yes, non-ECC RAM is an issue, but that's easily upgraded on their AMD servers.

For €63 you'll get a Ryzen 7 7700 (Zen 4, 8-core, 16-thread) box with 64 GB ECC RAM and 2x1 TB NVMe SSDs. Google Cloud's N2D-Standard-16 with 8 cores (16 vCPU threads, Zen 2 or Zen 3), 64 GB ECC RAM, and no storage costs $550/mo. It may not be a perfect comparison, but it's also 8x the price - oh, and Google will charge you $0.085/GB for bandwidth that Hetzner throws in for free. Even Google's spot pricing is more than double the cost.

I do agree that non-ECC RAM is an issue, but if you're willing to go with AMD servers, it becomes a very cheap issue to fix.


I really wonder how they house the desktop-grade hardware. I'm so used to racked 1U/2U servers, but do you really need that big a chassis for a desktop CPU, a couple of RAM DIMMs and some SSDs? What are your thoughts?


They have ATX cases on shelves. You can take a look at this tour of their datacenter [1] for more insight. ATX cases are visible around the 3 minute mark.

--

[1] https://www.youtube.com/watch?v=5eo8nz_niiM


SoYouStart (which is OVH) have a Singapore region, they also have an "Asia" region, but I don't know where precisely that is located - https://www.soyoustart.com/asia/

We use a mix of SYS and Hetzner here and have found them both to be excellent and very comparable.


The catch is they lack enterprise features. No PCI DSS compliance, no more than 25 (?!) servers per network, etc. Sure, you can work around these limitations, but that's the catch.


Perhaps enterprises who have those requirements are not the target customer base?


I think what Hetzner have done is specialise in doing one thing really, really well, and that is their product: they run servers. They don't offer any of the „cloudy" vendor lock-in things like functions, DBaaS, blah blah, but if you want to run a server (VM or BM), they have quite a solid offering. I really like them and use their products in lots of my projects.


They do some cloudy stuff, https://www.hetzner.com/cloud, but admittedly it is the basics. Not even a Digital Ocean, let alone an AWS.


Indeed they do, but if you look at the product range they offer, it's still in the "we help you run your server" territory, not "this is a service that eliminates Component X of your architecture". I really like the Hetzner cloud!


Might be faster to ask this here:

ECC memory on the Cloud products? I'd like to assume they're using AMD CPUs (consumer-grade ECC support; as everyone should), ECC RAM, and at least mirrored storage. However, I'd really like to see such basic features confirmed.


After many years of using Hetzner dedicated servers, I recently started using their US cloud for a project. So far, extremely happy, and it's very cost effective. Even cheaper than Digital Ocean (which I also use extensively).


> They don’t offer any of the „cloudy“ vendor lock in things like functions, DBaaS

Neither of those are lock-in. Postgres is pretty much the same if you self-manage it, or if you let Scaleway or AWS or OVH manage it for you. Functions can be if in a special format (Lambda), but pretty much everyone has standardised on Containers as a Service (KNative/OpenFaaS).


Maybe also check out contabo, I think they have some well priced VPS options in their Asia DCs


> What's the catch?

To me, there kind of isn't one. I have generally had very good and fast support, even on the auction servers (which are even wilder in terms of pricing than the ones linked -- e.g., I was paying like 40 euros a month for 40TB storage + a modern i7 and 64GB RAM).

The real 'catch' is the more limited offerings; it isn't the kind of one-stop-shop that AWS is where you can rent 8x A100s in a dozen datacenters while having them manage your database and a billion other things.

But if you just need lots of CPU, memory or storage, don't want to pay exorbitant bandwidth fees, and Europe is fine, they are pretty great.

> It seems like Hetzner is the only company in the world offering these kinds of prices, right?

OVH is not quite as cheap, generally, but they have lots of inexpensive offerings, especially on their SoYouStart/Kimsufi lines [1], with much more variety in terms of datacenters, including Singapore and Australia, depending on what you need in Asia/APAC -- likely better DDoS mitigation than Hetzner as well.

LeaseWeb can be really cheap as well. Their public pricing on the main website can appear kind of expensive, or at least not Hetzner-tier cheap, but if you're ordering a decent number of servers, they seem to offer great volume discounts.

For example, through a reseller [2], I've got 100TB of their "premium" bandwidth @ 10Gbps, Xeon E-2274G, 64GB RAM, 4x8TB hard drives, and a 1TB NVMe SSD in Amsterdam that I use as a seedbox for like 60 euros.

Another semi-low-cost provider, depending on what you need, in Asia that is worth mentioning is Tempest.

I believe they are owned by Path.net, and so they've got better DDoS mitigation than most other providers without costing an arm and a leg; in Tokyo, $140 will get you an E3 1240v2 + 16 GB RAM and $200 will get you a Ryzen 3600X + 32 GB RAM, both 10 Gbps unmetered.

Not a great option for someone who needs a ton of variety in their hardware, but if you need something high-bandwidth with decent specs in Asia, it's not awful.

[1]: Worth noting that, although unmetered, SYS is generally limited to something like 250Mbps speeds, and Kimsufi is 100Mbps. You do get lucky occasionally and sometimes your server magically has uncapped gigabit, but for guaranteed high-bandwidth servers, the main OVH site is the only option.

[2]: I'm using Andy10gbit, who is fine for my needs - e.g., I don't need to reinstall the OS 24/7 or have instant support since it's just used for torrents. It'd be a bad option for a business, though, since I wouldn't want to be relying on some dude on Reddit if something goes horrifically wrong. WalkerServers is another example of one of the ultra-cheap LeaseWeb resellers.


WholesaleInternet offers some cheaper options in the US


And they also killed AX91..


I have been working in the industry for 20 years and I don't have a single bad word to say about Hetzner.

Their service has always been impeccable and their servers just run.

I have been running k8s clusters on Hetzner for quite some time now, and the flexibility for the price is exactly what I expect from a hoster!

Now with this addition, Hetzner closes another gap that made projects spend thousands more on enterprise clouds. So I'm not only happy but also proud that they just keep innovating!


Not my experience. Hetzner is really a discount provider, though they may have gotten better.

I used to work for a team that rented dozens of servers from them and we had disk failures almost every other week, which required creating a support ticket and asking them to swap out the drive so we could rebuild the RAID array.

They used regular SATA consumer drives and they were probably pretty old or refurbished or something.


Yeah, they indeed are a discount provider, which means that they're about as reliable as AWS or GCP services (which have constant partial failures).

I've been very happy with Hetzner for some workloads.


The expectation with AWS and GCP is that the constant intermittent failures shouldn't be visible to end users.* So filing tickets is a big difference in user requirements.

* Though GCP (back when it was just App Engine) wasn't always this way, and as GAE users inside Google we had to write our own code for what we expected to fail: retries, backoffs, etc.


I still get mails from AWS a few times per month about an "instance retirement", which you then have to plan for accordingly (basically by not having a single instance as a single point of failure). If you do the same with Hetzner, you won't notice the failure either (failed disk on a single machine? just business as usual, it will eventually be replaced).

Obviously replacing AWS features like RDS multi-AZ masters is not going to be as easy and might be worth paying the whole AWS premium, but that really depends on the business size, traffic, internal experience and many other factors.


Hah me too! EC2 instance retirement was so long ago I’d forgotten. :D


Yeah, turns out that if you write code for constant intermittent failures, it'll run well for rare intermittent failures of Hetzner machines too.


With AWS - failure means the instance is automatically retired, and your ASG causes a new instance to automatically be created and put in service without you having to do anything.

With hetzner - failure means your monitoring detected a disk failure and sent you a PagerDuty alert; you then have to check the alert, figure out what failed, and send in a support ticket to get the disk replaced. This will take a couple of hours, after which you have to rebuild your RAID array and hope no more disks fail. All the while operating with degraded performance.

(Don't get me wrong, hetzner is _great_, I've used them for years and highly recommend for numerous scenarios - but the idea that their failure and reliability is anything like "the cloud" is fanciful)
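The "monitoring detected a disk failure" step described above usually amounts to watching /proc/mdstat for a degraded md array. A minimal sketch; it parses a canned sample so it runs anywhere (on a real host you'd read /proc/mdstat itself, and the device names here are made up):

```shell
#!/bin/sh
# Detect degraded Linux software-RAID arrays: in /proc/mdstat, a member
# status like [UU] means all mirrors present, while an underscore
# ([U_]) marks a missing/failed member. Parsed from a canned sample.
set -eu

mdstat='md0 : active raid1 sda1[0] sdb1[1]
      974400 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[0]
      487876544 blocks super 1.2 [2/1] [U_]'

# Count lines whose status brackets contain an underscore.
degraded=$(printf '%s\n' "$mdstat" | grep -c '\[[U_]*_[U_]*\]') || true
echo "degraded arrays: $degraded"   # degraded arrays: 1
```

A cron job doing this against the real /proc/mdstat and paging when the count is non-zero is roughly the workflow the parent comment describes.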


In my real-world experience, their reliability beats AWS reliability by a massive margin.

On AWS, something is constantly breaking. One of the 100s of services will always have performance issues, degraded availability or some other crap going on.

On Hetzner, the hard drive, CPU or RAM on one of the machines will die once every few years. Maybe.

(This changes as your service grows and scales out, but there's a stupid high amount of traffic a few machines can take.)


Neither of your anecdotes match my own personal experiences - so I'm sure the general truth is somewhere in between.

I've been responsible for millions of dollars of AWS spend over the last decade. I've had virtually zero AWS caused downtime in that period outside of the few major outages that affected the whole world (for example that major S3 outage) - but the "100s of services will always have performance issues or degraded availability" has literally never been true for me. I've had hundreds of instances be retired - but that is all automated and without downtime.

Over the last 18 months at my current company, we've had 100% uptime: there has not been a single AWS incident that has affected us in us-east-2. And since we're using ECS and Fargate, we've also not had to worry about instance retirement.

On the other hand - I've also had numerous personal servers with hetzner over the years - and the hardware is _old_. I've had at least 3 hard drives go bad over the last ~8 years.

Again, I still strongly recommend hetzner for many cases - but I just think it's important to go in understanding the difference in responsibility for things like hardware level monitoring.


That's apples and oranges. With RAID the server is never retired and you don't need to set up auto-scaling and all the scale-out complexity that comes with it. It just keeps running. The replace/resilver cycle may degrade performance whilst the data is re-replicated, but bringing up a new VM will also degrade performance for a while whilst it replicates data from some other node onto itself.


That’s if you use their dedicated servers. They have Hetzner Cloud which does the above as well.


It's up to you what type of hard drive you order in your servers.

So I guess you can blame your own team for ordering consumer SATA?


Not sure what time frame we're talking about here, but I don't even remember there being any choice of disk.


Disk choices have been a thing for a very long time. You can see the options when you scroll down in the configurator. [1] On top of that, Hetzner offers custom builds, so if you have specific needs you can get specific builds. Just not through the point-and-click interface.

--

[1] https://www.hetzner.com/dedicated-rootserver/ax52/configurat...


Sounds like OP might have been using the cheapest option, without ECC RAM, server-grade CPUs, etc.?


I heard the devops guys at a previous job wrote a script to detect CPU fan failures. Then got asked to stop sending so many emails about CPU fan failures. Was still worth it because it's cheap, but they're cheap because it's not new and reliable hardware.


Maybe he detected 0 RPM without looking at temperature... some controllers will just stop the fans when there is enough cooling.

We had a bunch of racks, and fan failures have got to be one of the rarest kinds. Even on my personal junk I've had zero actual failures, just one fan getting noisy.

> Was still worth it because it's cheap, but they're cheap because it's not new and reliable hardware.

That's not really different from cloud: they aren't buying top-of-the-line servers either, they're building their own just like Hetzner, optimized for the cheapest cost per unit of performance.


Are controllers that throttle down to zero common in the datacenter space, though? I don't think I've ever heard a quiet or silent server, except for the ones tech people build themselves at home.


Hetzner is known to do their own thing instead of blindly following datacenter conventions. E.g. they have the physical architecture (how their buildings are designed) so optimized for convective flow that they get sufficient airflow without extra fans most hours of the year, and only need active "heat pump" style cooling in rare exceptions. They do have board-level fans, but it would be almost out of character if those did not throttle to zero whenever possible.

Big german language report: https://www.golem.de/news/besuch-im-rechenzentrum-so-betreib...


I'm no particular Hetzner fan, but is it possible there were a load of false positives in there?


"Oh look, fan stopped, means it's dead"

CPU temperature: 40C


Really? I've been running a few dedicated servers for several years straight without a failure.

Even Hetzner Cloud just works, and I don't know how they do it but it's dirt cheap.


How many is a few?

If you've had 500 servers for a long time, and a new script discovers 5% have a failure and sends 25 emails at the same time, I can see why Hetzner might want a single email. Numbers made up, but you get the idea.


Is this any worse than not even knowing when parts of your AWS instance are failing?


If I search my emails: "EC2 has detected degradation of the underlying hardware hosting your Amazon EC2 instance"

It's just something you have to do yourself with Hetzner.


Right, but AWS gets to make that choice; you have no idea whether they consider a failing fan "degradation".


You should compare EC2 with Hetzner Cloud. Hetzner Cloud does send such alerts occasionally.


I have heard complaints about the performance of their network connectivity and quality of their peerings.


> I have been working in the industry since 20 years and i have not a single bad word to loose about Hetzner.

I use them, and I've been very happy with the pricing, the reliability and the service.

The only possibly bad thing to say about them: their static IPs aren't always "clean". I had a couple of instances where the IP I was allocated was blacklisted, and it took some back and forth with their customer service to fix the problem (I got a new IP).

But other than that quality / price ratio is way higher than GCP, AWS and their ilk.

I also use OVH, they're pretty decent as well, in the same ballpark as Hetzner.


> their static IPs aren't always "clean". I had a couple of instances where the IP I was allocated was blacklisted, and it took some back and forth with their customer service to fix the problem (I got a new IP).

Isn't that a problem you're always going to get with $any provider? You never know who previously owned that IP and what they did with it.


I'm an OVH customer too, mainly for their Exchange offering. I disagree that they are on the same level as Hetzner: support often doesn't know what they're doing, and half their tools are broken.

For instance, I was trying to migrate a large Exchange group off to Office 365 recently, and their migration assistant simply has not been updated to support modern auth for Office 365, among other things.

The migration also failed from their own accounts for some reason...

As for the dirty IPs: yes, that happens, but it's not really Hetzner's fault, as the IP you were assigned had been taken away from the "bad actor" beforehand. If you tell that to your support agent, you can get a new one without a problem; I have.

EDIT: Ironically, now I can't even list my Exchange accounts in OVH. It just keeps on loading and loading.


You must be really lucky then :) My experience from ~2006 to 2014/15 (on and off, different companies, many different servers) was mostly bad, literal "you get what you pay for", with disks in dedicated servers dying left and right.

I've only become a customer again since their vps cloud offering and I've actually been recommending that because it has been flawless for me for years.


And you must be a customer of the server auctions :D

No, but I get it; I had a lot of failures in general with spinning disks. I think it has to do with SSDs and NVMes being much better at telling you how much juice they have left. I don't necessarily think it's a problem of Hetzner alone, though, as disks at other hosters have failed on me too.

I also used to maintain a couple of "plain old offices", and hard disk failure is sadly just all around us when you are using bare metal.

Another reason for Kubernetes!


How did you setup the k8s cluster? Is it running on bare metal or VMs?


Both, actually! It depends on what kind of workload you're running, but for any kind of web application that "just" wants high availability, the VMs are more than enough.

They provide a CSI driver for Kubernetes for their block storage, and private networking for both, too.

You can even have the masters on VMs and the nodes on bare metal.


They had some problems about 10-15 years ago (hardware, network). The fire some time ago wasn't cool either.


Fire was OVH? Or different incident?


Which fire? Are you not confusing with OVH SGB2 fire in 2021?


I also had problems with Hetzner hardware ~10 years ago, as in servers randomly freezing due to CPU bugs. But in the past few years I haven't encountered problems at all. I have two dedicated servers for running CPU-intensive CI/CD workloads and for the past few years they've run smoothly.


Hmmm, I think you're right... the fire was at OVH. But I recall something in the past 2 or 3 years about data loss at Hetzner.

Personally, I've only had some network issues with them.


I heard it was pretty hot (sorry, couldn't resist)


Recently moved some workloads from GCP to a dedicated Hetzner box with a 12900K, 128GB RAM and 2TB NVMe RAID. This thing SCREAMS; running e.g. Postgres with the same capacity in GCP would probably cost > $1k/mo. Hetzner takes around 120 EUR. Also, the bandwidth is like 10x cheaper, with a much larger free tier.


> Also, the bandwidth is like 10x cheaper, with a much larger free tier.

Hetzner bare metal have unlimited bandwidth.


The downside is your neighbours also having unlimited bandwidth. We had to rebuild one Citrix environment on a different (identical) box, because the first was in a network segment with terrible network performance.

If you draw the short straw, your box will be sharing bandwidth with a few BitTorrent seedboxes or someone's video CDN node.


For anything bandwidth-sensitive that can’t be spread out across multiple nodes I’d shell out for the 10Gbit option which is still reasonably priced and still includes 20TB of free bandwidth.


Dedicated or Cloud?


I think that, most of the time, the horror stories and complexities of DevOps from the past (which likely paved the path for convenient cloud providers) are what keep people away from running their own servers.

That being said, I run much smaller projects and servers, and haven't worked at a scale that really requires heavy workloads generating thousands in monthly bills at GCP.

So I think most devs being conditioned to start their first projects on the free tiers of cloud providers makes it really difficult for them to move to their own servers when they need to.


Absolutely. I'm pretty new to infra and I learned in our big company AWS environment. I've dabbled a bit with home servers and stuff but I have no idea how to manage a bare metal host "properly".


And you will get proper customer support with Hetzner


All the old European VPS/dedicated server providers are cheaper than Big Cloud.


For some use-cases the Hetzner server-auction is also great value and I’ve had good experience with them.

https://www.hetzner.com/sb

For example, I was running some experiments that required lots of RAM. Right now you can get a server with 256GB RAM for €60/month.


The main draw of the auctions for me is that there is no initial setup fee, which makes experimenting a lot cheaper.


Oh wow, this could come in handy


Yep, I got 128GB RAM and 2x 1TB Samsung NVMe for 40 a month.



There's a great YouTube tour of one of their data centres here: https://www.youtube.com/watch?v=5eo8nz_niiM 'Over 200,000 Servers in One Place! Visiting Hetzner in Falkenstein (Germany)'

The channel is well worth a subscribe too.


Another good low-cost provider to consider is WholeSaleInternet.net. Their datacenter is located in Kansas City. I've been colocating and renting dedicated servers through them since 2005.

Servers start at $9 per month. A comparable example:

Dual Xeons - 36 cores / 72 threads - 128GB memory - dual 1TB NVMe - 5 IPs - $80 per month, $0 setup. The same setup with dual 2TB NVMe is $100 per month.

I'm colocating a couple of servers there for $40 per month each; bandwidth is 1Gbit unmetered and comes with 5 IPs. A couple of 1Us and towers. I recently bought a used 1U server off Amazon for $400. It has 48 cores, 96GB memory and 4x 1TB drives, and came with a one-year warranty on the components.

Hetzner was solid, but their network was sketchy at times.


> Dual Xeons - 36 cores / 72 threads - 128GB memory

Just clicked; unfortunately it is out of stock.

> I'm colocating a couple of servers there for $40 per-month each

Are you living nearby? Or did you send them the server and they installed it?


I live in NJ, so I usually preconfigure my equipment and ship it. I've also sent them bare servers and had them connect a KVM for me to do the setup remotely. They have common replacement parts on hand; I've bought some memory sticks from them and had them do the swap.

You can check back; they update the list as server availability changes. Other providers there are Dedispec and Joesdatacenter, which may have something in stock that you're looking for.


I checked with them and they said they only do full-rack colo, so perhaps you are going through a reseller. Is that the case, and if so can you provide details? I'm interested.


I'm currently doing individual colo (I don't have a rack atm). But I haven't added any space in three years. I've replaced some older equipment, but that's about it. I've been talking with another customer about splitting a rack ($200 each), as I'm spending $270 for 6 colo servers ($240 for the space and $5x6 for additional IP space per server).

joesdatacenter.com (Kansas City) has single-server colo for $50 a month.


Does anyone know of a managed postgres service that happens to use Hetzner? I'm running my own PG instances on Hetzner, but getting tired of managing it myself. Going with a 3rd party makes me worried about latency (currently it's like 1-4ms...I'd rather not increase that to 50ms for a service located somewhere else).

Haven't found anything by Googling, so was wondering if anyone here works somewhere that does this.


Maybe Crunchy Data? (https://www.crunchydata.com/products/crunchy-high-availabili...) I haven't tried it myself; it sounds like a very nice service. Maybe someone here has experience.


I've never used them but asked them once about pricing. For my use case they were really expensive ($1,500/vCPU)


Just an idea: deploying a quick k3s/k3d with a PostgreSQL operator like CloudNativePG might help you take away some of the maintenance overhead.


We maintain a contract with RedPill/Linpro for managing a DB cluster on Hetzner. When we ordered it it was cheaper to use Linpro + Hetzner than hosting the DB on Digital Ocean...


I happen to run a company that specializes in this: https://www.ayedo.de


Does your company only cater to people who speak German?


We have been running Zalando's Patroni in K8S on Hetzner and couldn't be happier.


Noob question: if I rent one of these dedicated servers, what happens if some hardware fails? Do I need to contact support or will they detect that automatically? If something needs to be fixed manually (e.g., hard drive, cpu, network), how long does one need to wait for dedicated servers?

I'm used to cloud VMs where if one dies, I can quickly spin up another one effortlessly (I never have to contact support or anything like that).


I don't know about Hetzner, but my experience with OVH dedicated servers and failures is like this: they detect when the server is down, mainly when it's off or doesn't ping, and then they try to boot into their debug distribution and perform some checks on the machine. They don't monitor other health issues, however (how would they, since you are running your own system?), and therefore don't do anything before they detect a "down" status.

Some failures I experienced and had to monitor/detect myself: overheating (they replaced the thermal paste when I told them I saw strange readings in the CPU stats), RAID disk failure, and high SSD wear (i.e. partial failure, server still running; they replaced the failed disks after I told them).

Most of the time the issues have been resolved within 1-4 hours on the low-cost Kimsufi and SoYouStart offers, even on weekends and at night. When the server is still running, they often require a shutdown first.

I'm quite happy with this, as I am highly technical in these subjects and like to look under the hood, but with dedicated servers you really do have to do more maintenance/monitoring/planning yourself.


I don't think this is accurate. I have rented an OVH dedicated server (through SoYouStart) for about the last decade, let me share some maintenance experiences:

> They don't monitor other health issues however (how would they since you are running your own system?) and therefore don't do anything before they detect a "down" status.

My server has a hardware RAID card. I had one incident where OVH contacted me, said there was an issue with one of the drives, and that they would reboot the server at X time to replace it. They did so, and the problem was solved with no requests or intervention on my part.

I had another incident where I was told the motherboard died. IIRC, it died around 1am my time and was replaced by 5am my time. They of course turned the system back on for me. I was asleep the whole time, and this was likewise solved with zero requests or intervention on my part.

Besides this, I can count the number of times an internet or power issue made my server unreachable on a single hand. IMO, a great experience for a dirt cheap host.

That all being said: OVH's IPv6 solution is laughably bad, and it is the single reason why I would switch hosts if a better one with a North American presence appears.


What you describe are hardware failures, and as I said, they detect hardware failures. When the server goes down, they are on it by themselves.

But some issues are not failures, and you have to work on those on your side.

Most of the time the RAID is software nowadays, for example.

IPv6 works fine for my many servers at OVH.


I don't see that in what you wrote, but I did see the mention of them not monitoring RAID. Hence one reason why I thought your comment was inaccurate, per my experience.


Technically, stuff like overheating they could monitor via IPMI (which they most likely use for OOB control anyway).


The thing is, it physically wasn't overheating, because a process on the system (Intel/Ubuntu) was heavily throttling the CPU to keep it from overheating. So the machine was almost useless, very slow, but the temperature was OK. When the throttling mechanism was disabled, it did indeed overheat. It's only because of those throttling system processes that I found out about the physical problem.


Just like others already replied, hardware monitoring at the OS level is up to you. For disks, they even provide documentation on how best to set up smartd on Linux, and I'm sure they have similar docs for Windows. In fact, their technical documentation is excellent in general.

But they often go above and beyond for you. I've rented several servers from them for many years, and once or twice I got an e-mail from their datacenter team telling me that they had noticed an error LED blinking on one of my servers, actively offering to plan a repair intervention. All I had to do was come up with a downtime window and communicate it to them. Very slick.

I'd say about half of the overall value of Hetzner is in their quality support.
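For reference, the kind of smartd configuration such docs typically suggest is a one-liner in `/etc/smartd.conf` (the mail address and test schedule here are placeholders, not Hetzner's recommendation):

```
# Monitor all disks: track all SMART attributes (-a), enable offline testing
# (-o on) and attribute autosave (-S on), run a short self-test daily at 02:00
# and a long one on Saturdays at 03:00, and mail on any failure.
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com
```

Restart the smartd service after editing and you'll get mail well before a disk dies outright, which is exactly the window you need to schedule a swap with support.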


In my experience support will gaslight you into thinking it is your problem. I had a Hetzner server that was shutting off at random hours several times per week.

I showed them the sudden loss of power events in the logs. "It must be a problem with your OS modifications that we don't support".

OK, I wiped the machine to the stock image that you provide and it's still having power loss events. "Sure, we'll run a stress test for a couple minutes ... stress test passed OK, it's still your fault!".

The events happen randomly during the week, a stress test is not going to show that. Can you just move me to a different physical machine? "No."

This was over the course of several days, when I had an event coming up that I NEEDED the server for. I ended up going back to Azure and paying 10x the cost, but at least it worked great.


I am not much of a conspiracy theorist, but after going to the Hetzner site to look at my support history I was presented with this:

https://i.imgur.com/3DKc9OC.png

I have never seen this page before when trying to login. Make of that what you will.


To be honest, that's an incredible leap in logic: assuming someone from Hetzner is name-searching their brand, found your comment, looked up your account, and then "blocked" your client.

That's some dedicated client-response team, if so!


I just did this recently with a dedicated server in RAID 6 where a disk failed. They have a page on their wiki that walks you through it, but basically you boot into the rescue system (network boot, activated in the user panel). Then you identify the failed disk by its existing or missing serial number, input that into the support form, and request a replacement. This was done in about 20 minutes; then I rebuilt the RAID, rebooted, and it was fine.
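With mdadm software RAID, that flow looks roughly like this (device names are examples, and the exact steps are in Hetzner's docs; this is a sketch, not their procedure verbatim):

```shell
# In the rescue system: identify the failed disk. Working disks report a
# serial number; a dead one often doesn't show up at all.
smartctl -i /dev/sda | grep -i serial
cat /proc/mdstat                        # a degraded array shows e.g. [U_]

# If the failed member is still half-alive, mark it out of the array
# before the physical swap:
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# File the support request with the serial number(s); after the swap, copy
# the partition table onto the new disk (GPT example) and re-add:
sgdisk -R=/dev/sdb /dev/sda             # replicate sda's table onto sdb
sgdisk -G /dev/sdb                      # randomize GUIDs on the new disk
mdadm --manage /dev/md0 --add /dev/sdb1

# Watch the rebuild progress:
watch cat /proc/mdstat
```

The array stays usable throughout; you just run degraded until the rebuild finishes.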


Yes, you contact support and provide them a description of what is faulty (the disk, with its exact serial number), and they'll replace it, usually within 30-60 minutes.

Provisioning of servers was always quite fast: same day or the next business day.

My experience is a little dated; I used to order a bunch of dedicated boxes from them for our clients, and with Hetzner we always had the best experience. Also the most bang for the buck.


After spending years with Rackspace, Host Europe and renting a cage, I never want dedicated servers again. That's all I'm saying.


Monitoring the hardware inside the server is up to you; you notify Hetzner. In more than 10 years I've never had an issue other than disk degradation after heavy use.

Then you contact support and schedule the disk change: you first deactivate the disk in the RAID (save the geometry etc.), they replace the disk, and then you rebuild the RAID onto the new disk. That's it. With SSDs you may not even need to do this anymore.


> Then you contact support and schedule the disk change: you first deactivate the disk in the RAID (save the geometry etc.), they replace the disk, and then you rebuild the RAID onto the new disk. That's it.

I imagine this would take time, right? Like not 5 minutes, but maybe 3 hours tops? So, if I intend to run a SaaS (that shouldn't be down more than 1h/day), then renting only 1 dedicated server could be qualified as "risky"?


RAID is not a replacement for backups anyway. There are many other ways to lose data besides physical disk failures. There are also ways to lose the RAID in the face of physical disk failure. It's an availability solution (a box can probably keep running without an outage, without resorting to backups).


You should be running at least three, preferably four (3 + hot spare), mirrored instances if you require that level of uptime, regardless of provider or tech (bare metal/VM).


Well, yes, having a single server is risky in any production context if it's storing any sort of state that can't be easily brought back up on another server


I'm not sure what "deactivating RAID" means.

They will all be hot-swap disks. You remove the old disk and slide in the new one (or in this case, tell them to do it). The RAID system rebuilds the array in the background over the next few hours.

During that time you will lose data if it's RAID 5 and another disk fails.


> I'm not sure what "deactivating RAID" means.

mdadm --manage <array> --remove <failed disk>

so your machine doesn't have a fit when the disk is detached. Or equivalent.


Presumably he means detaching the disk from your RAID solution so it doesn't freak out when it's physically removed and replaced.


I am using software-RAID servers; as other commenters say, it's not a hot-swap operation, it's a cold swap :D


Rebuilding the RAID doesn't take the server down, but disk performance (i.e. IOPS) will be reduced while it takes place. Running a service on 1 server always carries some risk, whatever form that server takes.

For example, I have loads of stuff on Linode, but I always make sure I keep backups off-Linode, in case I get a random ToS account shutdown and they stop speaking to me, etc.


Absolutely, some redundancy is necessary if uptime is critical (as well as backups).


It is risky; 3 hours tops is a good estimate. You can schedule the change for out of business hours. But better to choose another strategy: start cloning the disk at the first failure notice, and cancel the old server.


I once started a SaaS business on Hetzner; all good and super cheap. Some prospects were turned off by us hosting their data at a "no-name" provider. Switched to AWS: total door-opener, no more questions asked. In hindsight, we would not have been able to sign contracts this fast without a big-name provider.

IT departments really need to revise their due diligence processes. I wonder how many folks were coerced into a similar migration just to benefit from household-brand credibility.


I'm considering using Hetzner (or equivalent) for our CI setup with GitHub Actions custom runners. Right now we're on CircleCI and it costs an arm and a leg.

Does anyone have experience to share with that kind of setup? What's the maintenance like?


I'm using Hetzner for our CI setup with GitLab runners. With a simple cloud-init script, it's cheap and I have nearly zero maintenance: https://gitlab.com/21analytics/gitlab-runner-cloud-init
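Not the linked script itself, but the general shape of such a cloud-init runner bootstrap looks something like this (the registration token is a placeholder; the install script URL is GitLab's official package repository):

```yaml
#cloud-config
package_update: true
packages:
  - docker.io
runcmd:
  # Add GitLab's runner package repo, install the runner, then register it
  # against your instance/project with the Docker executor.
  - curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | bash
  - apt-get install -y gitlab-runner
  - |
    gitlab-runner register --non-interactive \
      --url "https://gitlab.com/" \
      --registration-token "REPLACE_ME" \
      --executor docker \
      --docker-image "alpine:latest"
```

Paste it into the "cloud config" field when creating the Hetzner Cloud server and the box registers itself on first boot.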


Same; the only maintenance is that it runs out of disk space every once in a while. I have a cron script that does various Docker pruning tasks now, so the incidents have become fewer and farther between every time I discover a new leak.


I had the same issue, so I made the cloud-init script create a systemd service that does the Docker cleaning.
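The unit pair for that is small; a sketch (the unit names and the seven-day cutoff are arbitrary choices of mine, not from the comment above):

```ini
# /etc/systemd/system/docker-prune.service
[Unit]
Description=Prune unused Docker data on the CI runner

[Service]
Type=oneshot
# Remove stopped containers, unused images/networks older than a week,
# then unused volumes.
ExecStart=/usr/bin/docker system prune -af --filter until=168h
ExecStart=/usr/bin/docker volume prune -f

# /etc/systemd/system/docker-prune.timer
[Unit]
Description=Run docker-prune daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now docker-prune.timer`; a plain cron entry running the same two `docker ... prune` commands works just as well.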


That's exactly one of my use cases. I set up runners for 2 projects last week and action runtime went down by a large margin, from 10-15 minutes to 3-4.

I use a single dedicated server that costs ~40 EUR/month, an AX41-NVMe, and each runner is a separate user account to allow for some isolation.

Depending on your setup, you might need to spend some time adjusting jobs to have proper setup/cleanup and isolation between them (but that's not really Hetzner-specific, just a general issue).


This was one of my Q4 projects at work last year. We moved CI to 3x hetzner machines, each running four copies of the self-hosted github runner, and drove our build/test times from >20min down to 3-4 min on average. It's ridiculous how big a difference running on a capable bare metal box makes. We run a thousand or so builds daily and pay about 300 euro a month for the setup; our overage fees from github actions were often higher than that. Reliability has been "ok": one of the machines started throwing errors that smell like bad RAM/CPU (bus errors, random reboots, etc), we raised a support ticket, they nuked it and gave us a fresh one.

We provision them with ~200 lines of shell script, which we get away with because they are not running a "prod" workload. Don't forget to run "docker system prune" on a timer! Overall these machines have been mostly unobtrusive and reliable, and the engineers greatly appreciate the order of magnitude reduction in github actions time. I've also noticed that they are writing more automation tooling now since budget anxiety is no longer a factor and the infrastructure is so much faster.


Short answer: it depends on your needs and your comfort level with server management. But generally I would say a server running something like Jenkins or similar is not that taxing on the maintenance budget. It does have some up-front cost in getting everything configured and running as you like, but after that it is fairly easy to maintain. I usually automate the provisioning as much as possible, both for the self-documenting aspect and to make it easier to repave the system or spin up additional nodes as needed.


I’m running a kubernetes cluster on Hetzner with the GitHub actions runner chart. Zero issues and much faster (and cheaper!!) than GitHub’s runners.

My only issue is that security scanners can’t run on self-hosted runners (GitHub refuses the artifact result, so technically, they do run, but the results fail to upload).


> or equivalent

Do you have any alternatives? I thought Hetzner was fairly unique in their dedicated server offerings (for the price, I mean).


OVH (with their low-end brands such as SoYouStart and Kimsufi) are in the same ballpark.


I have a question: these newer Intel machines all have performance and efficiency cores. This makes sense since they are designed for the desktop, but is it meaningful for server-side applications?

Recent Linux kernels finally support these CPUs (do they have full support?), but if you host a service where you want predictable (and fast) response times, why would you use a mix of both core types? Or would you just turn off the efficiency cores for server-side usage?


I don't expect the efficiency cores to be slow enough to have a noticeable effect on response times. Rather, they'll just have smaller throughput, and the other cores will pick up the rest when the server is loaded.

I'm assuming you don't shoot yourself in the foot by running a strictly single-threaded workload explicitly pinned to the efficiency cores.


We tried this with the older generation and there is definitely a difference in response time. Why would it be otherwise? Sure, our workload is more computation-intensive than most, but I'm pretty sure you'd see a difference for other workloads too.

> running a strictly single-threaded workload explicitly pinned to the efficiency cores

Those cores are slower than e.g. the cores of the (desktop) AMD CPU we tested at the same time (also offered by Hetzner). So it is rather expensive and inefficient to use Intel (desktop) CPUs for server-side applications, as we can only use their performance cores.
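If you do want to keep a latency-sensitive service off the E-cores rather than leave it to the scheduler, a sketch of how that could look on a recent Linux kernel (the sysfs paths exist on hybrid Intel parts; the service name is a placeholder):

```shell
# The kernel exposes the two core types separately on hybrid CPUs:
cat /sys/devices/cpu_core/cpus   # P-cores, e.g. "0-15"
cat /sys/devices/cpu_atom/cpus   # E-cores, e.g. "16-23"

# Pin a process to the P-cores only:
taskset -c "$(cat /sys/devices/cpu_core/cpus)" ./my-server

# Or restrict an existing systemd service via its cgroup:
systemctl set-property my-server.service \
    AllowedCPUs="$(cat /sys/devices/cpu_core/cpus)"
```

That effectively "turns off" the E-cores for that service while leaving them available for background work like log shipping and backups.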


Anybody who's never set up a dedicated server or colo will be pleasantly shocked at how cheap it is compared to the pay-by-the-hour cloud.

When these guys open up dedicated servers in a USA region, it's going to be huge. Unfortunately, at the moment only the cloud offering is available in the USA, so you're stuck with a bit of latency round-tripping to the EU.


That's exactly what I'm waiting for too. I'll be snagging at least one once it's available.


OVH has a Montreal data center I use.



That also just leads to the homepage after a short "security check".

Weird. It seems like they're reading the Referer header or something and just redirecting HN users to the root of the website.

Works fine if you copy the link and paste it into a new tab.


This website disables text selection on mobile. I can never understand why a company would do this. There are several technical terms on the page I want to look up; why make this difficult for me?


That's just as crap as the websites that either disable the back button or fill the history with redirect pages, so you can't go back without opening a new tab. Bring back the days when we could just disable JavaScript.


I can only recommend Hetzner. At my former company, we started with 8 servers nearly 15 years ago and now have more than 6000 with them.

https://www.hetzner.com/customers/talkwalker


Interesting! How did you get GPU servers? They used to be available but can you still get them? And can you share more details about the automated hardware failure monitoring?


Wish I could actually register. As mentioned in another thread, they have a faulty and yet extremely aggressive registration check, where they make you send them your ID along with a bunch of other things, and yet will still reject you based on who knows what.


I've seen a lot of spam coming from their IPs, so they probably have a lot of headaches filtering out bad customers.


tl;dr Mini rant (pro dedicated, anti-cloud)

Amazon has done an amazing job of convincing people that their hosting choice is between cloud (aka, AWS) or the higher-risk, knowledge intensive, self-hosting (aka, colocation). You see this play out all the time in HN comments. CTOs make expensive and expansive decisions believing these are the only two options. AWS has been so good at this, that for CEOs and some younger devops and developers, it isn't even a binary choice anymore, there's only cloud.

Do yourself, your career, and your employer a favor, and at least be aware of a few things.

First, there are various types of hosting, each with their own risk and costs, strength and weaknesses. The option that cloud vendors don't want you to know about are dedicated servers (which Hetzner is a major provider of). Like cloud vendors, dedicated server vendors are responsible for the hardware and the network. (If you go deeper than say, EC2, then I'll admit cloud vendors do take more of the responsibility (e.g. failing over your database)).

Second, there isn't nearly enough public information to tell for sure, but cloud plays a relatively minor role in world-wide server hosting. Relative to other players, AWS _is_ big (biggest? not sure). But relative to the entire industry? Low single-digit %, if that. The industry is fragmented, there are thousands of players, offering different solutions at different scales.

For general purpose computing/servers, cloud has two serious drawbacks: price and performance. When people mention that cloud has a lower TCO, they're almost always comparing it to colocation and ignoring (or aren't aware of) the other options.

Performance is tricky because it overlaps with scalability. But the raw performance of an indivisible task matters a lot. If you can do something in 1ms on option A and 100ms on option B, but B can scale better (but possibly not linearly), your default should not be option B (especially if option A is also cheaper).

The only place I've seen cloud servers be a clear win is GPUs.


Based on my two decades of experience in the field, ranging from being part of a team building supercomputers to one moving amazon.com to AWS, there are many dimensions you need to consider for computational workloads.

The primary deciding factor is always security. You simply cannot use any small vendor because of the physical security (or the lack thereof). Unless of course you do not care about security. If a red team can just waltz into your DC and connect directly to your infra, it is game over for some businesses. You can easily do this with most vendors.

The secondary deciding factor is networking. Most traditional co-los have a very limited understanding of networking. A CCIE or two can make a real difference. Unfortunately those guys usually work for bigger companies.

The third deciding factor is air conditioning and electricity considerations. Worst case you are facing an OVH situation. https://www.datacenterdynamics.com/en/opinions/ovhclouds-dat....

(It is really funny, because I had warned them that their AC/cooling solution was not sufficient, and they explained to me that I was wrong. I was not aware of the rest (wooden elements, electricity fuckups, etc.).)

"""During the year, an article in VO News by Clever Technologies claimed there were flaws in the power design of the site, for instance that the neighboring SBG4 facility was not independent, drawing power from the same circuit as SBG2. It's clear that the site had multiple generations, and among its work after the fire, OVHcloud reported digging a new power connection between the facilities."""

The fourth would probably be pricing. TCO is one consideration, but only after you have made sure that the minimum requirements are met.

So based on the needs, somebody can choose wisely, based on the ___business requirements___. For example, running an airline vs. running complex simulations have very different requirements.


AWS and GC (and I assume Azure, but I haven't looked) have definitely set the standard with respect to checking off all the boxes when it comes to helping customers with security audits and requirements. This is a place where other providers have seriously lagged. I've been involved in cases where using the cloud is essentially a pass and not using the cloud raises red flags.

From a sales point of view, I agree with you that, for a lot of folks, this might be the main concern. If you're doing B2B or government work this might be, by far, the most important thing to you.

However, this is at least partially pure sales and security theatre. It's about checkboxes and being able to say "we use AWS" and having everyone else just nod their head and say "they use AWS."

I'm not a security expert (though I have held security-related/focused programming roles), but as strong as AWS is with respect to paper security, in practice, the foundation of cloud (i.e. sharing resources) seems like a dealbreaker to me (especially in a rowhammer/spectre world). Not to mention the access AWS/Amazon themselves have and the complexity of cloud-hosted systems (and how easy it is to misconfigure them (1)). About 8 years ago, when I worked at a large international bank, that was certainly how cloud was seen. I'm not sure if that's changed. Of course, they owned their own (small) DCs.

(1) - https://news.ycombinator.com/item?id=26154038 The tool was removed from github (conspiracy theory!), but I still find the discussion there relevant.


> The primary deciding factor is always security

so, anywhere where your workloads or data are physically co-located on the same hardware as someone else's should be automatically disqualified, right?


No. It is only a risk if an attacker can use it somehow. Would you show me a scenario for how an attacker could learn which physical server my lambda function is running on in AWS, break out from their container, and get access to mine? This risk is acceptable for most workloads. Maybe not for the three-letter gov agencies, which is why they got GovCloud. You see, again: what is the use-case? What is the risk? What risk is acceptable?


> Do [...] your career [...] a favor, and at least be aware of a few things. [emphasis mine]

Doing your career a favor is how we ended up in this situation in the first place. The tech industry had so much free money floating around that there was never any market pressure to operate profitably, so complexity increased to fill the available resources.

This has now gone on long enough that there are now entire careers built around the idea that the cloud is the only way - people that spend all day rewriting YAML/Terraform files, or developers turning every single little feature into a complex, failure-prone distributed system because the laptop-grade CPU their code runs on can't do it synchronously in a reasonable amount of time.

All these people, their managers and decision makers could end up out of a job or face inconvenient consequences if the industry were to call out the bullshit collectively, so it's in everyone's best interest to not call it out. I’m sure there are cloud DevOps people that feel the same way but wouldn’t admit it because it’s more lucrative for them to keep pretending.

This works at multiple levels too, as a startup, you wouldn't be considered "cool" and deserving of VC funding (the aforementioned "free money") if you don't build an engineering playground based on laptop-grade CPU performance rented by the minute at 10x+ markup. You wouldn't be considered a "cool" place to work for either if prospective "engineers" or DevOps people can't use this opportunity to put "cloud" on their CVs and brag about solving self-inflicted problems.

Clueless, non-tech companies are affected too - they got suckered into the whole "cloud" idea, and admitting their mistake would be politically inconvenient (and potentially require firing/retraining/losing some employees), so they'd rather continue and pour more money into the dumpster fire.

A reckoning on the cloud and a return to rationality would actually work out well for everyone, including those who have a reason to use it, as it would force them to lower their prices to compete. But as long as everyone is happy to pay their insane markups, why would they not take the money?


SVB offered a perk of free AWS and Google Cloud credits to new startup clients, so even more "free money" to entrench the startup ecosystem in the cloud.

https://www.svb.com/account/startup-banking-offers


I think the performance argument is huge as there are a lot of factors at play here.

For one, people generally underestimate the performance cost of their choices. And that reaches from app code, to their db and their infrastructure.

We’re talking orders of magnitude of compounding effects. Big constant factors that can dominate the calculation. Big multipliers on top.

Horizontal scaling with all its dollar cost, limitations, complexity, maintenance cost and gotchas becomes a fix on top of something that shouldn’t be a problem in the first place.


Question for the audience: if Hetzner's highest-specced bare-metal servers would work great for my use-case, except that my customers are mostly in America and my service is realtime-ish enough that I need a low-latency RTT to them, then what's the next best option? What's the "Hetzner of North America"? (I know Hetzner has US datacenters, but they don't do any dedicated hosting there AFAIK, only cloud hosting.)

Personally, so far, the best near-equivalent provider I've found that actually offers well-specced machines in North America, is OVH, with their HGR line and their Montreal DC. Are there any other contenders?

And if not, why not? what's so hard about getting into the high-spec dedicated hosting space in the US specifically? Import duties on parts, maybe? (I've found plenty of low-spec bare-metal providers in the US, and plenty of high-spec cloud VM hosting providers in the US, and plenty of high-spec bare-metal providers outside the US; but so far, no other high-spec bare-metal providers in the US.)


We've just finished migrating from Hetzner's dedicated servers in Germany to their US Cloud VMs for improved latency. You don't get the same raw performance as their dedicated servers, but their VMs are still good performers, and they ended up being the best value US cloud provider we found [1]

[1] https://servicestack.net/blog/finding-best-us-value-cloud-pr...


I'll be sure to take a look, but we're really depending on the "high-spec" part as well as the "low RTT" part. IIRC Hetzner's cloud offerings only go up to 48vCPU+192GB memory+1TB disk, all of which are well below our needs.

We're currently using these at OVH: https://www.ovhcloud.com/en-ca/bare-metal/high-grade/hgr-hci... — and we really need the cores, the memory, the bandwidth, and the huge gobs of direct-attached NVMe. (We do highly-concurrent realtime analytics; these machines run DBs that each host thousands of concurrent multi-second OLAP queries against multi-TB datasets, with basically zero temporal locality between queries. It'd actually be a perfect use-case for a huge honking NUMA mainframe with "IO accelerator" cards, but there isn't an efficient market for mainframes—so they're not actually price-optimal here compared to a bunch of replicated DB shards running on commodity hardware.)


OVH also has US East Coast and West Coast datacenters, so you can get a little closer to your customers if you need the latency. Though in my experience, support isn't great for any of the OVH affiliated companies.


> support isn't great for any of the OVH affiliated companies

Also they'll run off with your money if you can't provide an ID after you've already paid. No service but no refunds either.


Would love to see these offerings being available in the Ashburn VA location.


Shining, dazzling, remarkable, perfect, gems! There are some great marketing words for you.

But seriously, there's been lots of talk on HN recently about alternatives to the big clouds. This is it - rent a big server and do it all on Linux.


At the same time they removed quite a few server options that fit in between the current specs. This is not so nice; my upgrade path is interrupted now and I need to order a much bigger and more expensive machine.


They sell pretty much everything they had in the past 10 years in the Server Auction https://www.hetzner.com/sb


This is true, but this is second hand hardware and you can't be sure how used it is. Fair enough, good enough for most cases.


Yeah when I land on your web site and I get:

  Request on Hold - Suspicous Activity Detected.
I'm turned off instantly. I'm on a static IP on a respectable ISP in the UK and I'm not waiting in a queue for 57 seconds and counting to look at the sales pitch.

Edit: so I use that time wisely to shitpost about it on HN, then check TrustPilot and I see:

"Unfortunately, based on your description (I need a ticket number or other customer information to find you in our system), you accidentally resembled an abuser."

Not a good outward appearance. I'll stick with AWS and paying through the nose.


Yeah, companies and their small-minded "sysadmins" are very eager to use an IP address as the primary predictor of hostile intent; sometimes they add user-agent as well. Not sure why, but this is the sort of behaviour that means I cannot trust such companies with my business.


You don't know how bad it is. As a third worlder, I tried to sign up for Hetzner storage once, my account was immediately flagged as suspicious. That by itself isn't a big deal, much bigger web services do the same. But they sent me a link to some third party identity confirmation service, which made me do a dumb song and dance of holding an ID in front of my face in front of a webcam to verify myself and had both automated and human(?) stages. That by itself took a dozen tries to get through. But even after that, Hetzner still deemed my account as suspicious, I had to write a couple paragraphs of explanation to justify my existence. I assumed after this they should have no reason to not let me through. But 3 days later I got a mail saying the account was permanently banned from Hetzner.


This is exactly why I do not want to use such companies. I do not want to be an accessory to this racist crap. Your IP does not define who you are. Hetzner could do two things:

- stop operating in countries they don't want business from

- treat people equally

What they are doing is: pathetic.


To put that in perspective, I am currently in a developing country and CloudFlare websites are making me click 'I am human' checkboxes dozens of times per day.


HN does the same


Sorry but who cares what HN does?

Is this a business? No.

Should we follow any of the practices of HN? I do not think so. My personal website has a more scalable infrastructure than HN.


HN is basically glued to CloudFlare from a business perspective so do we expect anything else?


I tried to register with them once and also hit one of those automated trip wires, so they made me send them my ID with a bunch of other stuff and then _refused to create an account for me_. The internet is full of stories like mine, so yeah, their abuse detection is total crap.


Maybe you're behind «Carrier-grade NAT», sharing your public IP with thousands of other customers?


No I've got a paid static IP which I exclusively own on a business service line.

There is no excuse for being a victim of an algorithm.

And I never get this anywhere else!


My static IP was erroneously blocked from Wikipedia as some previous owner used it to run open proxies or the like. Never had any problems anywhere else. It happens.


"It happens" is a bad defence.


It's not a defence, it's just reality. Abuse is real. Using IP addresses to combat abuse is very broken – I too had carrier-grade NAT when I lived in a developing country in the past – but it does work. Abuse prevention is hard, but very much required. Abuse prevention with zero false positives is nearly impossible with the current state of technology. So yes, "it happens" because all of this is hard. You can pretend the problem doesn't exist, but that's denying reality.


As you might understand that is exactly what everybody, both the guilty and the innocent, says.


In a court of law I am innocent until proven guilty.

In technology circles I am guilty until proven innocent.

That's the difference, the outcome of which is the technology provider can quite frankly fuck off.


It’s the ‘the customer is always right’ philosophy, clearly by someone who has never worked retail or services.


This is not retail.


Isn't that exactly what it is?


Gosh, I wish there were something similar in the US. Right now I'm on the IONOS "server of the month[1]," which is $33/mo for dedicated six-core, 16GB, 1TB HDD. I run game servers, which could benefit from SSDs and more RAM, but I'm priced out. I'd take the 44€ config in a heartbeat.

Is anybody aware of anything that's price competitive in the US (or within a 50ms ping)?

[1] https://www.ionos.com/servers/value-dedicated-server#package...


> Is anybody aware of anything that's price competitive in the US (or within a 50ms ping)?

OVH [1] is not quite as cheap, but I can't really think of anyone else in the area that is totally comparable. One draw of OVH, Hetzner, etc, for me over the truly small, cheap dedicated server providers is they both have pretty decent networks and free DDoS mitigation, which is really nice for things like game servers and such where CloudFlare isn't an option.

OVH's sub-brands like SoYouStart [2] will sell you decently specced dedicated servers starting at around $30 a month in Quebec, which tends to be more than good enough for most of my "US" needs.

They do have a couple datacenters in the United States too, not just Canada (+ quite a few in Europe, one in Singapore, some in Australia, etc), but I believe the Virginia/Oregon servers aren't available on the cheaper SYS site -- still cheap, though, but not quite $30 cheap.

[1]: <https://www.ovh.com>

[2]: <https://www.soyoustart.com/> (main downsides compared to OVH proper is the connection is capped at ~250Mbps, and although all servers have DDoS mitigation, the SYS and Kimsufi servers don't allow you to leave it on 24/7 -- so when you get attacked, it might take a minute or so to kick in, and then it'll remain on for 24 hours, I believe)


Hetzner opened at least one US location, with more on the way, I think.

Edit1: missed word;

Edit2: people pointed out below that the US locations don't have dedicated servers, cloud servers only;


Thanks. I took a look, it seems that they only have the "cloud servers" there, no dedicated. Those aren't as economical, as a 64GB server like the 44€ dedicated model starts at 153€/mo.


Shame, hopefully they will expand their offerings.


US locations are cloud-only.


I like ReliableSite (reliablesite.net) - not as cheap as Hetzner but they they have a few US locations to pick from.


WholesaleInternet


Shouldn't surprise anyone, but Hetzner was also the best value US Cloud provider we found [1].

We've been a happy dedicated server customer since 2013, although we've just finished our transition from their dedicated servers in Germany to US Hetzner Cloud for improved latency.

Always happy to see Hetzner's name in lights, DHH also used Hetzner as a deployment target when announcing their MRSK tool that 37 Signals are using to automate deployments as they begin transitioning off the cloud.

[1] https://servicestack.net/blog/finding-best-us-value-cloud-pr...

[2] https://www.youtube.com/watch?v=LL1cV2FXZ5I


The very same day Hetzner launches dedicated, non-cloud, servers in North America based location - east or west coast - I will buy one and move off of OVH.


How do people intend to use these hybrids in server roles? It would take some extra sophistication compared to a uniform CPU architecture. For example, in low-latency service you would not want your netrx IRQs served from efficiency cores. But you can imagine lots of crud these cores could handle aside from main serving path. I'd be tempted to boot them with isolcpus parameter.
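The IRQ-steering idea above can be sketched in a few lines. This assumes a hypothetical layout where the P-cores are CPUs 0-7 and the E-cores are CPUs 8-15 (verify with `lscpu --extended`), and an interface named eth0:

```shell
# 1) Keep the E-cores out of the general scheduling pool via kernel boot
#    parameters, e.g. in /etc/default/grub:
#      GRUB_CMDLINE_LINUX="isolcpus=8-15 irqaffinity=0-7"
# 2) Build a hex CPU affinity mask covering the 8 P-cores (CPUs 0-7):
mask=$(printf '%x' $(( (1 << 8) - 1 )))
echo "$mask"   # bitmask with the low 8 bits set
# 3) Apply it to each of the NIC's interrupts so netrx stays on P-cores:
# for irq in $(awk -F: '/eth0/ {gsub(/ /,"",$1); print $1}' /proc/interrupts); do
#   echo "$mask" | sudo tee /proc/irq/"$irq"/smp_affinity > /dev/null
# done
```

With `isolcpus` in place, the E-cores only run work explicitly pinned there (via `taskset` or cgroup cpusets), which matches the "use them for background crud" approach.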


"... [Types of] Dedicated Servers", right?


Nah, they just announce whenever they rack up new servers.

Yeah, types of.


If you care about your carbon footprint, hosting in Germany (where Hetzner has most of its DCs) is unfortunately not a great option.

France (OVH), Sweden, Norway, and (usually) Switzerland are much better (thanks to nuclear for France, and hydro for the others).

See https://app.electricitymaps.com


Hetzner in Germany seems to use hydropower:

https://www.hetzner.com/unternehmen/umweltschutz/


Hetzner has a data center in Finland too


Hetzner also has a pop in Finland.


It was a sad moment, but this week, after 15 years, I shut down and deleted my last Linode VPS.

Migrated to Hetzner, I hope it will be ok.


Why should we bother with dedicated servers for mission-critical workloads? The advantage of VMs is that I can delete a VM at the earliest sign of trouble and spin up a new one without even making any attempt to contact support. When a dedicated server fails, we end up playing a game of tag with technical support.


You can do this now. You can buy 10 bare-metal machines, cheaper than any AWS equivalent, add Docker, and done.


I wish you could rent something like a dedicated server on an hourly basis.

As I understand it, cloud VMs are not capable of running large neural networks like Stable Diffusion, Llama, etc. because they don't have a GPU, right?

I would like to play with these new toys, but I don't know where I can get a cloud machine capable of running them.
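For a rough sense of why these models need a GPU with substantial VRAM, a back-of-envelope estimate of weight memory alone (parameter counts are approximate, and this ignores activations and the KV cache):

```python
# Rough VRAM needed just to hold model weights in half precision.
def weight_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """fp16/bf16 stores 2 bytes per parameter."""
    return n_params * bytes_per_param / 1e9

for name, params in [("LLaMA-7B", 7e9), ("LLaMA-13B", 13e9)]:
    print(f"{name}: ~{weight_gb(params):.0f} GB in fp16")
# LLaMA-7B needs ~14 GB for weights alone, which is why a big-VRAM GPU
# (or aggressive quantization) is required rather than an ordinary cloud VM.
```

Quantizing to 4 bits cuts that by 4x, which is how the smaller models squeeze onto consumer cards.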


Servers with GPUs are expensive because no provider wants to turn their datacenter into someone else's mining rig. You can get them, but they'll cost you.

In Hetzner's case, you'll find some with a GTX 1080 (https://www.hetzner.com/sb?additional=GPU), but they'll cost you at least double what they'd cost without one.


You can definitely run a Cloud VM with GPU and pay by the second. I've used Google Cloud for this and it works very well


GPU instances do exist on all major providers. I used some on Azure for sure.

If you just want to play with DL, I suggest Google Colab Pro.


Lots of GPU Clouds around. To play with toys I use runpod's community cloud. Very affordable.


The best thing about Hetzner is their cloud UX, compared to the complicated, nonsensical, and inconsistent alternatives out there! And they actually give prices per month by default. Some cloud vendors seem to hide, or make it hard to find, the cost-per-month option!

Oh and their service is great :)


Too bad the entry-level offers still have such small SSDs. My EX41 from 2017(!) has SSDs that size. It’s high time to up the minimum size.

I don’t need 64 GiB of RAM or a gazillion CPU cores or whatever, I just need reasonably-sized (1 TB+) storage. Not HDDs, but SSDs.


They allow you to add SSDs at extra cost.


At tremendously steep premiums – RAID1 (or equivalent) is a must, after all. Some combinations are confusingly not possible (adding two NVMe SSDs – gotta select “Datacenter Edition” SSDs).

Might as well get a bigger server for the higher price.


> At tremendously steep premiums

I think it is about $10/TB, looks reasonable.


Off topic question about Hetzner. Do they bill US customers from a US location? I avoid non-US companies because I have to unlock non-US locations for payments; that is a hassle and I prefer to leave that payment restriction in place.


it keeps bringing me to a 500 error, and now i'm banned for 60 seconds with "Request on Hold - Suspicous Activity Detected" ...?

edit: seems the entire hetzner site is doing this, redirecting me to /_ray/pow which is an error


At least it's telling you that in this case.

They also "shadowban" users, i.e. you'd try to register only for them to lock your account, make you send them your ID and such, and then still refuse to open an account for you. All because their crappy fraud detection system didn't like the IP from which you sent it or something.


Have you even considered that you are a robot?


cue existential crisis


Same thing, not the best ad for their own Security product :)

Heray is a security product powered by Hetzner



Total noob on dedicated servers and I have been thinking about getting one for fun and learning.

What happens when a hardware component breaks? Does the price include repairs?

What happens when storage fails? Is data backed up, or is it on me to do that?


All hardware is theirs; you rent it. They repair it when it fails at no cost, but they don't monitor everything (although some servers come with an agent to install that passes monitoring data to their system).

So if you, say, see SMART errors on a hard drive, you have to open a ticket with them to fix it.
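To see whether a ticket is warranted, you'd typically check the drive's SMART state yourself first with smartmontools. A minimal sketch (the attribute line below is a made-up sample; on a real box you'd pipe `smartctl -A` instead):

```shell
# On the server (requires smartmontools):
#   sudo smartctl -H /dev/sda    # overall health verdict
#   sudo smartctl -A /dev/sda    # full attribute table
# Pull the raw reallocated-sector count out of an attribute line; here we
# parse a hard-coded sample line for illustration:
line="  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0"
realloc=$(echo "$line" | awk '{print $NF}')
echo "$realloc"   # a value above 0 is worth mentioning in the support ticket
```

Anything non-zero there (or a failing `-H` verdict) is exactly the evidence Hetzner's support asks for in the ticket.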

> What happens when storage fails? Is data backed up, or is it on me to do that?

You have full control over server. If you want backup you need to set it up yourself.

I'd start with VM if you want to learn ops stuff. Those also usually have backup option at some extra cost.

Kimsufi (an OVH brand) also has some cheap ones. I have an Atom server with a TB drive for some offsite backups and a few services, and so far no problems.


A dedicated server provider is responsible for the hardware and the network. In this very important aspect, it's really no different from cloud (so yes, the price "includes" the repair).

It depends on the exact provider, but as a general rule, they are (a) slower to repair and (b) not as proactive (e.g. they might not detect an eminent failure). On even a medium-sized contract, you can generally work in 1-4 hour repair windows (many providers even offer this as standard), and better ones are definitely a bit more proactive.

There are like 1000 different dedicated providers, so it will depend on what yours offers, but unless it's a "managed dedicated provider", you're going to be responsible for your own backups. At best, you'll get some external storage you can use (e.g. Hetzner includes free 100GB ftp space, if I recall).


> they might not detect an eminent failure

*imminent.


For a host like Hetzner, everything except for hardware replacement is up to you.

> What happens when storage fails? Is data backed up, or is it on me to do that?

Servers are configured as RAID1 arrays, so after a disk fails you can ask them to replace it for you at a convenient time and no data should be lost. You're 100% responsible for backups, though.
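The RAID1 state is something you can verify yourself via the kernel's software-RAID status file. A sketch (the mdstat text below is a sample; on a real box you'd read `/proc/mdstat` directly, and device names will differ):

```shell
# Hetzner's default images typically set up Linux software RAID (md).
# A healthy two-disk RAID1 shows "[UU]"; "[U_]" means one mirror has dropped out.
sample="md0 : active raid1 sda1[0] sdb1[1]
      33521664 blocks super 1.2 [2/2] [UU]"
if echo "$sample" | grep -q '\[UU\]'; then
  echo "mirror healthy"
else
  echo "degraded - time to open a ticket"
fi
```

If it reads degraded, that's when you ask them to swap the failed disk, then re-add it to the array (e.g. with `mdadm --add`) so it resyncs.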


Yes, hardware repairs are included. If a disk fails you have the option to get a used replacement for free, or you can pay for a new one.

You have to do your own backups, but they offer storage for that.


Really hope Hetzner expands their US offering to include dedicated servers.


Strange that they are using desktop CPUs instead of server CPUs... I guess having as many cores per rack as possible isn't a thing anymore (at least to hetzner)


It's likely that a desktop motherboard + CPU is much cheaper than a server one.


It's a pity you can't get the 8TB (or 16TB! a man can dream) SSDs for these new servers (only for the AX101) for nice high-density, high-performance storage.



To me the TIL is that there are now desktop intel processors that support proper ECC. This no longer being a Xeon-only feature is really neat.


I've always liked Hetzner, since the days when I was deploying automatically with CI, before the cloud was called the cloud (~2009).


Wow. 16 threads, 64 GB RAM and a 2 TB NVMe SSD will make a fantastic Postgres/Mongo node for a fraction of the cost of managed solutions.


You lose fault tolerance, which is one of the hard parts of running Postgres.


Just spin up three of these babies.


The hard part is to configure, maintain and monitor replication and failover.
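To give a sense of what "configure replication" entails, here's a minimal streaming-replication sketch for PostgreSQL. Hostnames, user names, version numbers and paths are placeholders; monitoring and automatic failover (the genuinely hard parts) are what tools like Patroni or repmgr layer on top of this:

```shell
# On the primary, in postgresql.conf:
#   wal_level = replica
#   max_wal_senders = 5
# and in pg_hba.conf, allow the standby's replication user:
#   host  replication  repl  10.0.0.2/32  scram-sha-256
#
# On the standby, clone the primary and start in standby mode:
#   pg_basebackup -h 10.0.0.1 -U repl -D /var/lib/postgresql/15/main \
#     --wal-method=stream --write-recovery-conf
#
# Manual failover is then a promotion on the standby:
#   pg_ctl promote -D /var/lib/postgresql/15/main
```

The promotion itself is one command; deciding *when* to promote, fencing the old primary, and repointing clients is where the operational complexity (and the value of managed offerings) lives.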


I want a Hetzner-based vercel alternative and a fully managed database. Can someone build this please? ;)


Yep. I’ve built one and am selling it at a premium. It’s called Vercel.


I'd love to see more of their dedicated servers in the US. The prices are really good.


Been using Hetzner in Europe for about a decade now. Every year or so I look to switch to another server and get better hardware or cheaper prices. Their machines are tough; I've never had hardware troubles.


That’s good pricing - would have expected more of a premium for ddr5 and gen 4 ssd


I've used them in the past. Great customer service, good value for the money


Why are there no pulumi/terraform providers for Hetzner?



I've had plenty of issues with Hetzner and I wasn't even a heavy user.


Nothing interesting.


That's an ad


It is a free ad.


Why is this news? This looks very much like marketing to me.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: