
I'm not deploying; this is the server. I do backups, and I keep config in git.

Reproducibility? This is the server. I will restore from backups. There is no point in scaling.

If you want to argue that containerization and VMs are portable and deployable and all that, I agree. This is not a reasonable place to do that extra work.



Hey, do whatever floats your boat. Nobody said there is a single solution to every problem.

Don't pick a fight just because you are satisfied with a solution that is different from somebody else's.

I personally like docker-compose and Vagrant for my private services and development environments.

I use Vagrant when I need a complete VM. Think of embedded development, where I need a large number of tools in very specific versions, and I need them still working in 3 years without maintenance even if I change a lot about my PC setup (I run Linux everywhere).

I create a separate Vagrant environment for every project, and this way I can reinstate the complete environment at a moment's notice, whenever I want.
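
The day-to-day flow is basically this (the box name is just an example, use whatever base image the project needs):

    vagrant init debian/bookworm64   # drops a Vagrantfile into the project directory
    vagrant up                       # builds and boots the VM exactly as described
    vagrant ssh                      # work inside it
    vagrant destroy -f               # throw it away; vagrant up rebuilds it any time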

I use docker-compose for most everything else. Working on an application that needs MongoDB, Kafka, InfluxDB, Grafana and so on and so forth? Docker Compose to rule them all. You type one command and everything's up. You type another and everything's down.
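
As a sketch (two placeholder services with guessed image tags, not a real stack; an actual file would list everything the app needs):

    # docker-compose.yml
    services:
      mongodb:
        image: mongo:6
        ports: ["27017:27017"]
      grafana:
        image: grafana/grafana
        ports: ["3000:3000"]

    docker-compose up -d   # everything's up
    docker-compose down    # everything's down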

I use the same for my other services like mail, NAS, personal website, database, block storage, etc. Containers let me preserve the environment and switch between versions easily, and I am not tied to the package versions that the server's Linux distribution ships.

I hate it when I run a huge number of services and a single upgrade causes some of them to stop working. I want to stay constantly updated and have my services working with minimum maintenance. Containers let me make that decision for each service separately.


>I'm not deploying; this is the server.

Err, that's the very definition of deploying. Putting stuff on "the server".

What you mean is not that you're not deploying; it's that you're not testing/staging. You change things and test new stuff directly on your production server.


> Reproducibility? This is the server. I will restore from backups.

To me, reproducibility is about more than restoring the old bucket of bits I had. It's about understanding, about being able to reproduce the means by which a system got the way it is.

With Kubernetes, there is a centralized place where the cluster state lives. I can dump those manifests into a file. The file is human readable, well structured, consistently structured, and uniformly describes all the resources I have. Recreating these manifests elsewhere will let me reproduce a similar cluster.
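
For example, something along these lines (namespace and resource list are placeholders for whatever you run):

    kubectl get deployments,services,configmaps,ingresses -n my-namespace -o yaml > state.yaml
    # human readable, consistently structured, and re-applyable elsewhere with kubectl apply -f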

The resources inside a Kubernetes cluster are just so much easier to operate on, so much easier to manage, than anything else I've ever seen. Whether I'm managing SQS or Postgres or containers, being able to have one resource that represents the thing, having a manifest for the thing, is so much more powerful and so much better an operational experience than the alternatives: either a bucket-of-bits filesystem with a bunch of hopefully decently documented changes accumulated over time, or a complex Puppet or Ansible system that can enact said bucket of bits. Kubernetes presents high-level representations for all the things on the system, of all shapes and sizes. That makes knowing what I have much easier, and it makes managing, manipulating and replicating those resources much more straightforward.


Wrapping a new abstraction layer around a single server does not help; it is an expense you do not need. "Recreating these manifests elsewhere" will not work, because there is no elsewhere.

You cannot add complexity to a system to make it simpler.

You cannot abstract away the configuration of a system when there is only one system: you must actually do the configuration.

There is no point in having a high-level representation of all the things on the system: you have the actual things on the system, and if you do not know how to configure them, you should not be running them.


> There is no point in having a high-level representation of all the things on the system: you have the actual things on the system, and if you do not know how to configure them, you should not be running them.

> You cannot abstract away the configuration of a system

I've spent weeks setting up postgres clusters, with high availability, read-only replicas, backups, monitoring, and alerting.

It takes me 30 minutes to install k3s, the postgres operator, & recreate that setup.

Because there are good, consistent abstractions used up & down the Kubernetes stack. They let us build together and re-use the deployable, scalable architectures of the lower levels across all our systems & services & concerns. And other operators will understand them better than whatever I would have hand-built myself.
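
To sketch what those 30 minutes look like (CloudNativePG is just one example operator, there are others, and the actual cluster manifest is left out):

    curl -sfL https://get.k3s.io | sh -                 # single-node k3s
    helm repo add cnpg https://cloudnative-pg.github.io/charts
    helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace
    kubectl apply -f postgres-cluster.yaml              # HA, replicas, backups declared in one manifest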

> "Recreating these manifests elsewhere" will not work, because there is no elsewhere.

It'll work fine if you had another elsewhere. Sure, backups don't work if you have nothing to restore onto.

Dude, this is such a negative attitude. I want skepticism and critical thinking, but we don't have to hug servers this close forever. We can try to get good at managing things. Creating a data plane & moving our system configurations into it is a valid way to tackle a lot of management complexity. I am much more relaxed, many other operators are too, and getting such a frosty, negative "nothing you do helps" dismissal does not feel civil.


Not sure about reproducibility. If the HD fails, sure, restore from backup. But what if the motherboard fails and you buy/build a completely new machine? Does a backup work then, even if all the hardware is different? That's where a container makes restoring easier.


A container does not make restoring easier in the situation you have described.

The host for the containers still needs to be configured. That's where changes to NIC identifiers, etc. need to be handled.

In my situation, the host gets exactly the same configuration. The only things that care about the name of the NIC are a quick grep -r away in /etc/; 95% of everything will be up once I've redone the firewall script, and because that's properly parameterized, I only need to change the value of $IF_MAIN at the top.
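
Schematically, the script is shaped like this (interface name and rules are placeholders, heavily abridged):

    IF_MAIN=enp3s0                             # the only line that changes on new hardware
    iptables -A INPUT -i "$IF_MAIN" -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -i "$IF_MAIN" -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -i "$IF_MAIN" -j DROP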


On Windows platforms that's usually true.

I've not met a Linux system tarball that I can't drop on any other machine with the same CPU architecture and get up and running with only minor tweaks to network device names.
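
Roughly (a sketch; you still redo partitioning, the bootloader and fstab on the target):

    tar czpf sys.tar.gz --one-file-system /    # on the old box
    # on the new box: boot live media, partition, unpack onto the new root,
    # reinstall the bootloader, then adjust /etc/fstab and the interface names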


> Does a backup work then, even if all the hardware is different

Full disk backup, Linux? Most likely. We rarely recompile kernels these days to tailor to specific hardware; most of it is supported via modules. Some adjustments may be necessary (network interface names? non-free drivers), but for the most part it should work.

Windows? YMMV. 10 is much better than it was before and has more functional disk drivers out of the box. Maybe you need to reactivate.

The problem is mostly reproducibility. A system that has lived long enough will be full of tiny tweaks that you don't remember anymore. Maybe it's fine for personal use, but it has a price.

Even for personal servers (including Raspberry Pis) I try to keep some basic automation in place, so if they give up the ghost, they are cattle, not pets.


As someone who just switched motherboard + CPU on my home server, the worst thing was figuring out the names of the network interfaces.

enpXs0 feels worse than good old ethX with interface naming based on MAC addresses.


It feels worse, but you'll be even less happy when you add or remove a NIC and all of the existing interfaces get renamed.

But if you really want to, you can rename them to anything you want with udev rules.
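
For example (the MAC address is a placeholder):

    # /etc/udev/rules.d/70-persistent-net.rules
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="lan0"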


But this is happening with enpXsY! (Are you saying this was only happening with eth*?)

Whenever I remove a card from my first PCI slot, the other two cards get renamed... It frustrates me a lot.


Why wouldn't it? Unless you are changing architecture after a component of your system dies, there's no reason your old binaries would not work.


Drivers, config you missed/didn't realise was relevant/wasn't needed before, IDs (e.g. disks), etc.

Nix or aconfmgr (for Arch) help.

I still like containers for this though. Scalability doesn't mean I'm fooling myself into thinking hundreds of thousands of people are reading my blog; it means my personal use can outgrow the old old PC 'server' it's on and spill into the new old one, for example. Or that, for simplicity of configuration, each disk will be the sole disk mounted by its own Pi.


There's more than one way to skin a cat. If you're running something as simple and low-profile as OP suggested, all you need to back up from the system are the packages you installed and the handful of configurations you changed in /etc. That could be in Ansible, but it could be just a .sh file, really. You'll also need a backup of the actual data, not the entire /. Although, even if all you did was back up the entire /, there's a good chance it would work even if you tried to recover it on new hardware.
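
A sketch of what that .sh could look like on a Debian-ish box (swap in your package manager of choice):

    dpkg --get-selections > packages.list    # what's installed
    tar czf etc.tar.gz /etc                  # the handful of changed configs (and then some)
    # restore: dpkg --set-selections < packages.list && apt-get dselect-upgrade,
    # then copy back only the files you actually changed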

The services mentioned by OP don't need to talk to each other; they are all things that work out of the box by just running apt-get install or the equivalent. You don't need anything really fancy, and you can set up a new box with part of the services if they ever take too many resources (which, for a small setup, will likely never happen. At least in my experience).


Why do you feel the need to keep config in git if you've got backups? I think the answer to that is the same reason that I'd rather keep a record of how the server is customised than a raw disk backup.

I do think containerisation and VMs are more overhead than they're worth in this case, but there's definitely a lot of value in having a step-by-step logical recipe for the server's current state rather than just a snapshot of what's currently on the disk. (I'd favour Puppet or something similar.)


I keep config in git so that when I screw up, I can figure out how. What else would I be doing? Merging conflicts with my dev team?
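
As a sketch, assuming /etc itself is the repo (the path is just an example):

    cd /etc
    git log --oneline -- nginx/nginx.conf    # which commit changed this, and when
    git diff HEAD~1 -- nginx/nginx.conf      # what changed since the previous commit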


> I keep config in git so that when I screw up, I can figure out how.

Right. Which is exactly why I want installing a new service, upgrading a library etc. to be in git rather than just backing up what's on disk. A problem like not being able to connect to MySQL because you've upgraded the zoneinfo database, or the system root certificates, is a nightmare to diagnose otherwise.



