
Having a consistent, managed experience with good top-down controls is, in my world, far more efficient than tackling each service like a brand new problem to manage & operate independently.


I utterly fail to understand you. Please explain how configuring postfix is made easier by having a container.


You listed 21 different pieces of software, 21 different needs in your post.

For some reason almost everyone commenting here seems to think it's totally unreasonable to try to use a consistent, stable tool to operate these services. Everyone here seems totally convinced that, like you, it's better to just go off & manage 21 services or so independently, piece by piece, on a box.

If it were just postfix, fine, sure, manage it the old-fashioned way. Just set up some config files, run it.

But that's not a scalable practice. None of the other 20 pieces of software are going to be managed quite like that. Tools like systemd start to align a system's services into a semi-repeatable practice, but managing configuration is still going to be every-service-for-itself, and observability & metrics are going to look very different between systems. It seems long past due that we converge on some consistent ways to manage our systems: consistent ways of storing configuration (in Custom Resources, ideally), of providing other resources (Volumes), of exposing endpoints (Endpoints). We can make explicit the things that we have, so far, managed & operated on implicitly, define them, so that we can operate on them better.

It's not about containers. It's about coherent systems, which drive themselves to fulfill Desired State. Containers are just one example of a type of desired state you might ask for from your cluster. That you can talk about, manipulate, and manage any kind of resource (volumes, containers, endpoints, databases, queues, whatever) via the same consistent system is enormously liberating. It takes longer to go from zero to one, but your jump from one to one hundred is much, much smoother.
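To make that concrete, here's roughly what one declared resource looks like. This is just a sketch, the image and config names are placeholders, but the point is that the container, its config, and its volume are all declared in the same shape, and the cluster keeps driving toward it:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: postfix
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: postfix
      template:
        metadata:
          labels:
            app: postfix
        spec:
          containers:
          - name: postfix
            image: example/postfix:latest        # placeholder image
            ports:
            - containerPort: 25
            volumeMounts:
            - name: postfix-config
              mountPath: /etc/postfix            # config mounted the same way for any service
          volumes:
          - name: postfix-config
            configMap:
              name: postfix-main-cf              # config stored in the API like everything else

Swap postfix for any of the other 20 services and the shape of the declaration, and the tools you use to inspect & change it, stay the same.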


> but managing configuration is still going to be every-service-for-itself, and observability & metrics are going to look very different between systems

Literally none of this matters for a home server. I have a mail/web server whose configuration I haven’t had to change since I last set up Let's Encrypt, like 4 years ago. I don’t check metrics or have any observability beyond “does it work”, and that works fine.

You’re taking on a bunch of technical debt preparing for something that simply doesn’t matter.


it takes less time to set up k3s & Let's Encrypt than it does to diy: under 30 minutes.
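roughly: run the k3s install script (curl -sfL https://get.k3s.io | sh -), apply the cert-manager release manifest, then declare an issuer. a sketch of the issuer, with the email & names as placeholders (the exact cert-manager version & ingress class on your setup may differ):

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: you@example.com                 # placeholder
        privateKeySecretRef:
          name: letsencrypt-account-key        # secret for the ACME account key
        solvers:
        - http01:
            ingress:
              class: traefik                   # k3s bundles traefik by default

after that, annotating an Ingress with cert-manager.io/cluster-issuer: letsencrypt gets certs issued & renewed without touching it again.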

for some people perhaps diy everything is a win, makes them feel better, but I intend to keep building, keep expanding what I do. having tech that has an actual management paradigm versus being cobbled together makes me feel much better about that future, about investing myself & my time, be it a little bit of time, or more.

i've done enough personal server moves to know that the old-school automation i had, first puppet, then ansible, is still a lot of work to go run & coax back into action. but mostly it just runs once, leaves me with a bucket of bits, and doesn't help manage anything after that.

> simply doesn’t matter

lots of ways to think about our computing environments, and I am not in the "simply doesn't matter" camp.

maybe that applies to lots of people. they should take Kubernetes for a spin; i think it'll do an amazing amount of lifting for them & they can be up & running way faster.



