Containerization and container orchestration platforms are only partly about scalability.

The primary appeal for me is ease of deployment and reproducibility. This is why I develop everything in Docker Compose locally.

Maybe the equivalent here would be something like Guix or Nix for declaratively writing the entire desired state of the system's packages, services, and versions, but honestly (having no personal experience with them) they seem harder than containers.



I'm not deploying; this is the server. I do backups, and I keep config in git.

Reproducibility? This is the server. I will restore from backups. There is no point in scaling.

If you want to argue that containerization and VMs are portable and deployable and all that, I agree. This is not a reasonable place to do that extra work.


Hey, do whatever floats your boat. Nobody said there is a single solution to every problem.

Don't pick a fight just because your solution happens to be different from somebody else's.

I personally like docker-compose and Vagrant for my private services and development environments.

I use Vagrant when I need a complete VM. Think of a VM for embedded development, where I need a large number of tools in very specific versions and I need them still working in 3 years without maintenance, even if I change a lot about my PC setup (I run Linux everywhere).

I create a separate Vagrant environment for every project, and this way I can reinstate the complete environment at a moment's notice, whenever I want.

I use docker-compose for almost everything else. Working on an application that needs MongoDB, Kafka, InfluxDB, Grafana and so on and so forth? Docker Compose to rule them all. You type one command and everything's up. You type another and everything's down.
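For instance, a compose file for that kind of stack might look roughly like this (a sketch only; image tags and port choices are illustrative, not taken from any real project):

    # docker-compose.yml -- hypothetical local dev stack
    services:
      mongo:
        image: mongo:7
        ports: ["27017:27017"]
      influxdb:
        image: influxdb:2.7
        ports: ["8086:8086"]
      grafana:
        image: grafana/grafana:10.4.2
        ports: ["3000:3000"]

"docker compose up -d" brings the whole stack up; "docker compose down" tears it down.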

I use the same for my other services like mail, NAS, personal website, database, block storage, etc. Containers let me preserve the environment and switch between versions easily and I am not tied to the binary version of the Linux on the server.

I hate it when I run a huge number of services and then a single upgrade causes some of them to stop working. I want to be able to stay constantly updated and have my services working with minimal maintenance. Containers let me make that decision for each service separately.


>I'm not deploying; this is the server.

Err, that's the very definition of deploying. Putting stuff on "the server".

What you mean is not that you're not deploying; it's that you're not testing/staging -- you change things and test new stuff directly on your production server.


> Reproducibility? This is the server. I will restore from backups.

To me, reproducibility is about more than restoring the old bucket of bits I had. It's about understanding, about being able to reproduce the means by which a system got the way it is.

With Kubernetes, there is a centralized place where the cluster state lives. I can dump these manifests into a file. The file is human readable, well structured, consistently structured, and uniformly describes all the resources I have. Recreating these manifests elsewhere will let me reproduce a similar cluster.
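For example, something as crude as this gets you a readable snapshot of much of the cluster state (note that kubectl's "all" is only a convenience alias and skips ConfigMaps, Secrets, CRDs, etc.):

    kubectl get all --all-namespaces -o yaml > cluster-state.yaml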

The resources inside a Kubernetes cluster are just so much easier to operate on and manage than anything else I've ever seen. Whether I'm managing SQS or Postgres or containers, having one resource that represents the thing, a manifest for the thing, is a far more powerful operational experience than either a bucket-of-bits filesystem with a pile of hopefully decently documented changes accumulated over time, or a complex Puppet or Ansible system that can enact said bucket of bits. Kubernetes presents high-level representations of everything on the system, of all shapes and sizes, and that makes knowing what I have much easier, and it makes managing, manipulating, and replicating those resources much more straightforward.


Wrapping a new abstraction layer around a single server does not help, it is an expense you do not need. "Recreating these manifests elsewhere" will not work, because there is no elsewhere.

You cannot add complexity to a system to make it simpler.

You cannot abstract away the configuration of a system when there is only one system: you must actually do the configuration.

There is no point in having a high-level representation of all the things on the system: you have the actual things on the system, and if you do not know how to configure them, you should not be running them.


> There is no point in having a high-level representation of all the things on the system: you have the actual things on the system, and if you do not know how to configure them, you should not be running them.

> You cannot abstract away the configuration of a system

I've spent weeks setting up postgres clusters, with high availability, read only replicas, backups, monitoring, alerting.

It takes me 30 minutes to install k3s and the Postgres operator and recreate that setup.

Because there are good, consistent abstractions used up and down the Kubernetes stack, abstractions that let us build together, reuse the deployable, scalable architectures of the lower levels across all our systems, services, and concerns, and that other operators will understand better than anything I would have hand-built myself.
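To illustrate the point, and assuming the Zalando postgres-operator (field names follow that project's minimal example and may differ for other operators or versions), the whole HA setup reduces to a manifest along these lines:

    apiVersion: acid.zalan.do/v1
    kind: postgresql
    metadata:
      name: acid-minimal-cluster
    spec:
      teamId: acid
      numberOfInstances: 2      # one primary, one replica
      volume:
        size: 10Gi
      postgresql:
        version: "15"
      users:
        app_user: []
      databases:
        app_db: app_user

Apply that and the operator does the provisioning, failover wiring and backups that used to be weeks of hand work.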

> "Recreating these manifests elsewhere" will not work, because there is no elsewhere.

It'd work fine if you had an elsewhere. Sure, backups don't work if you have nothing to restore onto.

Dude, this is such a negative attitude. I want skepticism and critical thinking, but we don't have to hug servers this close forever. We can try to get good at managing things. Creating a control plane and moving our system configurations into it is a valid way to tackle a lot of management complexity. I am much more relaxed, many other operators are too, and getting such a frosty, negative "nothing you do helps" dismissal does not feel civil.


Not sure about reproducibility. If the HD fails, sure, restore from backup. But what if the motherboard fails and you buy or build a completely new machine? Does a backup work then, even if all the hardware is different? That's where a container makes restoring easier.


A container does not make restoring easier in the situation you have described.

The host for the containers still needs to be configured. That's where changes to NIC identifiers, etc. need to be handled.

In my situation, the host gets exactly the same configuration. The only things that care about the name of the NIC are a quick grep -r away in /etc/; 95% of everything will be up when I get the firewall script redone, and because that's properly parameterized, I only need to change the value of $IF_MAIN at the top.
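For the curious, the pattern looks something like this (a hypothetical sketch, not the actual script):

    #!/bin/sh
    # firewall.sh -- the only value that changes on new hardware
    IF_MAIN=enp3s0

    iptables -F
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -i "$IF_MAIN" -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -i "$IF_MAIN" -p tcp --dport 22 -j ACCEPT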


On Windows platforms that's usually true.

I've not met a Linux system tarball that I can't drop on any other machine with the same CPU architecture and get up and running with only minor tweaks to network device names.


> Does a backup work then, even if all the hardware is different

Full disk backup, Linux? Most likely. We rarely recompile kernels these days to tailor to specific hardware; most of it is supported via modules. Some adjustments may be necessary (network interface names? non-free drivers), but for the most part it should work.

Windows? YMMV. 10 is much better than it was before and has more functional disk drivers out of the box. Maybe you need to reactivate.

The problem is mostly reproducibility. A system that has lived long enough will be full of tiny tweaks that you don't remember anymore. Maybe that's fine for personal use, but it has a price.

Even for personal servers (including Raspberry Pis), I try to keep some basic automation in place, so that if they give up the ghost, they are cattle, not pets.


As someone who just switched the motherboard and CPU on my home server, the worst part was figuring out the names of the network interfaces.

enpXs0 feels worse than good old ethX with interface naming based on MAC addresses.


It feels worse, but you'll be even less happy when you add or remove a NIC and all of the existing interfaces get renamed.

But if you really want to, you can rename them to anything you want with udev rules.
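For example, a rule like this (the MAC address is a placeholder) pins a stable name to a specific card:

    # /etc/udev/rules.d/70-my-net-names.rules
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="lan0"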


But this is happening with enpXsY! (Are you saying this was only happening with eth*?)

Whenever I remove a card from my first PCI slot, the other two cards get renamed... Frustrates me a lot.


Why wouldn't it? Unless you are changing architecture after a component of your system dies, there's no reason your old binaries would not work.


Drivers, config you missed/didn't realise was relevant/wasn't needed before, IDs (e.g. disks), etc.

Nix or aconfmgr (for Arch) help.

I still like containers for this though. Scalability doesn't mean I'm fooling myself into thinking hundreds of thousands of people are reading my blog; it means my personal use can outgrow the old old PC 'server' it's on and spill over into the new old one, for example. Or that, for simplicity of configuration, each disk will be the sole disk mounted by a Pi.


There's more than one way to skin a cat. If you're running something as simple and low profile as OP suggested, all you need to back up from the system are the packages you installed and a handful of configurations you changed in /etc. That could be in Ansible, but it could be just a .sh file, really. You'll also need a backup of the actual data, not the entire /. Although, even if all you did was back up the entire /, there's a good chance it would work even if you tried to recover it on new hardware.
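On a Debian-ish box that sort of backup can be as small as this (a sketch; adjust paths to taste):

    #!/bin/sh
    # what did I install, and what did I change in /etc?
    apt-mark showmanual > packages.txt
    tar czf etc-backup.tar.gz /etc
    # plus the actual data, e.g.
    rsync -a /srv/ /mnt/backup/srv/

Restoring is roughly the reverse: xargs apt-get install -y < packages.txt, unpack the tarball, copy the data back.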

The services mentioned by OP don't need to talk to each other; they are all things that work out of the box by just running apt-get install or equivalent. You don't need anything really fancy, and you can set up a new box with some of the services if they are ever taking too many resources (which, for a small setup, will likely never happen. At least in my experience).


Why do you feel the need to keep config in git if you've got backups? I think the answer to that is the same reason that I'd rather keep a record of how the server is customised than a raw disk backup.

I do think containerisation and VMs are more overhead than they're worth in this case, but there's definitely a lot of value in having a step-by-step logical recipe for the server's current state rather than just a snapshot of what's currently on the disk. (I'd favour Puppet or something similar.)


I keep config in git so that when I screw up, I can figure out how. What else would I be doing? Merging conflicts with my dev team?


> I keep config in git so that when I screw up, I can figure out how.

Right. Which is exactly why I want installing a new service, upgrading a library etc. to be in git rather than just backing up what's on disk. A problem like not being able to connect to MySQL because you've upgraded the zoneinfo database, or the system root certificates, is a nightmare to diagnose otherwise.


Nix/NixOS for this purpose is very nice.
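For anyone who hasn't seen it, the whole system is described in one file; a minimal configuration.nix fragment looks like this (the package and service choices are just examples):

    { pkgs, ... }:
    {
      services.openssh.enable = true;
      services.nginx.enable = true;
      environment.systemPackages = with pkgs; [ git htop ];
      system.stateVersion = "24.05";
    }

nixos-rebuild switch applies it, and the file itself lives happily in git.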


Exactly! Other non-scalability concerns they address (specifically talking about Kubernetes here) are a basic amount of monitoring/observability; no-downtime updates (rolling updates); liveness/readiness probes; basic service discovery and load balancing; and resiliency to any single host failing (even if the total compute power could easily fit into a single bigger server).


Which of these are things you want on your house server? That's what the article author is writing about, and what I am writing about.

I do not need an octopus conducting a herd of elephants.


I can agree that the idea of reaching for Kubernetes to set up a bunch of services on a home server sounds a bit absurd.

"How did we get here?"

I'm not an inexperienced codemonkey in any sense of the term, but I am a shitty sysadmin. And despite being a Linux user since my early teens, I'm not a greybeard.

As sorry a state as it may sound, I have more faith in my ability to reliably run and maintain a dozen containers in k8s than a dozen standard, manually installed apps + processes managed by systemd.

Whether this is a good thing or a bad thing, you can likely find solid arguments either way.


You only have to learn 1 interface (albeit a complicated one) to use Docker/k8s compared with 1 interface per service to run them manually.


Hm, these days I feel like I only have to learn systemd. Reload config? View logs? Watchdog? Namespaces? It’s all systemd. If you are running on one machine, what does Docker/k8s give you that you do not already have?
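Concretely (the unit name "myapp" is just a placeholder):

    systemctl reload myapp           # reload config
    journalctl -u myapp -f           # follow logs
    systemd-analyze security myapp   # inspect the sandboxing

    # and in the unit file itself:
    # [Service]
    # WatchdogSec=30s
    # PrivateTmp=yes
    # ProtectSystem=strict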


> If you are running on one machine, what does Docker/k8s give you that you do not already have?

That feeling that you are part of a special, futuristic club.


Nothing, but it's pretty common to have the home server plus a desktop/laptop where you do most of the work (even for the home server), which may not be Linux, in which case containers are the easiest way.


I recently ended up setting up a "classic" server again after a significant time spent keeping mostly containerized infrastructure on k8s.

Never again; the number of things that are simply harder in comparison is staggering.


Sorry to sound pedantic, but what was harder? Containerized infra or a classic server? I assume the former but wanted to be sure.


"Classic" approach turned out to be maddeningly harder.

Everything, even inside a single "application", going slightly off the rails. Services that would die in stupid ways. Painful configuration that would have been abstracted away had I been running containers on k8s (some of the benefits might be realized with Docker Compose, but Docker on its own is much more brittle than k8s).

So much SSH-ing to a node to tweak things. apt-get fscking the server. Etc.

Oh, and logging being a shitshow.


To me it sounds like the latter, a classic server, which I agree with... After getting comfortable with containerized deployment, "classic" servers are a huge pain.


Fair enough, my point was more about using k8s to deploy applications rather than “house server” stuff, where it’s indeed unneeded more often than not.


Having zero downtime updates is quite nice. For example, I can set FluxCD to pin to a feature release of Nextcloud, and it will automatically apply any patch updates available. Because of the zero downtime updates, this can happen at any time and I won't have any issues, even if I'm actively using Nextcloud as it's happening.
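As a sketch of how that pinning can look (assuming Flux's HelmRelease; the apiVersion, repository name and chart version range here are illustrative and depend on your Flux setup):

    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: nextcloud
    spec:
      interval: 1h
      chart:
        spec:
          chart: nextcloud
          version: "4.6.x"   # stay on this feature release, take patch updates
          sourceRef:
            kind: HelmRepository
            name: nextcloud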


> This is why I develop everything in Docker Compose locally.

For a small setup like this, just having a docker compose file in version control is more than sufficient. You can easily leverage services someone else has set up, and the final config is easy to get going again if you need to rebuild the machine due to hardware failure.


Reproducibility is the key reason I use Docker.

Some stuff is really tricky to set up too - like postfix with working TLS, DKIM etc. Before Docker I'd eventually get stuff like this working, then a couple of years later something would break and I'd have no clue how to fix it because I hadn't touched it for so long. With Docker (and Compose and Swarm), everything is codified in scripts and config files, all ready to be deployed anywhere.



