
> All of that to deploy a bunch of standard applications in a single server, with very low load

> So now, not only do you have to worry about the applications themselves and their configuration, but you also have to worry about all the layers on top of them.

The biggest benefit of K3s here imho is not scaling or performance but having a standardized API to deploy standardized packages/containers/deployments (Docker/Helm) against. So instead of configuring and maintaining one of the many Linux flavours out there (and all those layers), I have one standard system to worry about.
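As a rough sketch of what I mean (the repo, chart and values below are made-up placeholders, and I'm going from memory on the CRD): K3s ships a small built-in Helm controller, so "installing a package" can just be another YAML resource you apply, and deleting that resource uninstalls the chart again.

    # Hypothetical HelmChart resource for K3s's bundled Helm controller.
    # The repo/chart/values are placeholders, not a real chart.
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: homelab-app
      namespace: kube-system
    spec:
      repo: https://charts.example.org   # placeholder chart repository
      chart: some-app                    # placeholder chart name
      targetNamespace: apps
      valuesContent: |-
        replicaCount: 1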

I've had a K3s Raspberry Pi cluster running for a few months now. Setup was trivial, and so is maintenance. If I have a problematic node, I just remove it from the cluster, reflash the SD card with a fresh install of K3os and put it back in. No state or further configuration to worry about.

All my previous homelab setups were either hand-crafted snowflakes or configuration managed by some tool (Puppet, Salt or Ansible). Each comes with its own problems, but they all accumulate state over time and become too hard to manage.



But you have a cluster, which is already different from a single node. There I see how a layer to manage several nodes starts making sense. But when you're managing a single server, you already have a standardized API to deploy standardized packages (APT/YUM + SystemD probably). It's just a different one. Of course when you deviate from that it starts getting messy, but that happens with everything.


I also have a single-node 'cluster' for the bulkier (disk I/O) stuff that won't fit on the Raspberry Pis. And it's nice to have the same API across all my setups. Apart from the data (backups, photos, media, etc.) that is stored on the disk, there is nothing of state worth saving on the node. If my root disk crashes, I just install K3s again, apply all the configurations from the yaml files on my workstation, and K3s pulls everything back up as it was before.

As you said, Linux does offer standardized packages, but they are not applications/deployments. Getting them running, beyond installing the binary, still requires a lot of configuration: Nginx (proxy, TLS), maybe a database, storage/LVM, firewall, etc. So you quickly run into tools like Puppet and Ansible to manage all this. The disadvantage is that they don't reverse the changes they make: if you try something out and deploy it with Ansible, there is no trivial way to undo it except reverting all the changes individually. There is also always the temptation to quickly tweak something by hand and forget to commit it to CM.
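For example, a minimal Ansible sketch (the template, path and handler names are made up for illustration) that installs and configures Nginx. Applying it is easy, but there is no built-in "un-apply"; you would have to write the removal tasks yourself.

    # Hypothetical tasks: install Nginx and drop in a reverse proxy config.
    # Undoing this means hand-writing new tasks (state: absent, remove the
    # config file, reload) rather than just deleting these lines.
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Deploy reverse proxy config
      ansible.builtin.template:
        src: myapp.conf.j2                    # placeholder template
        dest: /etc/nginx/conf.d/myapp.conf    # placeholder path
      notify: Reload nginx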

With a system like K8s, everything (container, volume, ingress) is declarative configuration, and the 'system' works to converge to the state you declared. If you delete something, K8s reverts all the changes, so there is no lingering state or configuration left. That makes everything way more manageable imho.
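As a sketch of what that looks like (the names and image are placeholders): one file declares the app and how it's exposed, "kubectl apply -f app.yaml" converges the cluster to it, and "kubectl delete -f app.yaml" removes everything it created again.

    # Hypothetical app.yaml: a Deployment plus a Service for it.
    # "kubectl apply -f app.yaml" creates/updates both;
    # "kubectl delete -f app.yaml" removes them again, no leftovers.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: nginx:1.25   # placeholder image
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
        - port: 80
          targetPort: 80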



