
> starting services - daemontools

Daemontools is great, but please don't pretend it's a proper general-purpose init system. How do you start a certain service before another one? I suppose you can hack that inside the run script but then you're yet again duplicating logic, badly.
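The "hack inside the run script" the parent mentions typically looks like the sketch below (service names hypothetical). daemontools has no dependency graph, so run scripts probe the dependency and exit non-zero; supervise re-runs the script about once a second until the probe passes.

```shell
#!/bin/sh
# Hypothetical daemontools ./run script for "myapp", which wants
# postgres supervised before it starts.

dep_ready() {
    # svok exits 0 when a supervise process is running for that directory.
    svok /service/postgres >/dev/null 2>&1
}

run_service() {
    if dep_ready; then
        exec myapp              # hand the process over to the real daemon
    fi
    echo "myapp: waiting for postgres" >&2
    exit 1                      # supervise re-runs this script shortly
}

# The real ./run script would end with: run_service
```

Note this only checks that the dependency is *supervised*, not that it is actually ready to serve, which is exactly the "duplicating logic, badly" problem.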

> journaling - daemontools

What? Daemontools provides a nice utility for logging a process's stdout/stderr to a file. That's it. That doesn't come close to the problem that journald tries to solve.

> billion different sentinel/watch type apps - daemontools.

This is about the only thing that daemontools solves.

> dependencies - just don't do it. This is the old age of power sequencers again. It's coupling.

And here comes the mythical "coupling is bad" statement while completely ignoring the problem. No, I don't want my NFS daemon to be started before my network connection is up, thank you very much.

Your post shows a complete lack of understanding of the problem.



> Daemontools is great, but please don't pretend it's a proper general-purpose init system. How do you start a certain service before another one? I suppose you can hack that inside the run script but then you're yet again duplicating logic, badly.

I'm not saying it's a drop-in replacement. The model is what's worth adopting, i.e. inspiration should be taken from it.

> What? Daemontools provides a nice utility for logging a process stdout/stderr to a file. That's it. That doesn't come close to the problem that journald tries to solve.

It guarantees that the logs are stamped externally, are committed to disk and that the logs haven't been tampered with by isolating them from other processes and user accounts that can modify or write to them. Don't need journaling on top, just separation of concerns.
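Concretely, that isolation is the standard daemontools log service pattern: supervise pipes the main service's stdout into a separate process running under its own account. A sketch (account name and sizes hypothetical):

```shell
#!/bin/sh
# Hypothetical /service/myapp/log/run script.
# multilog stamps each line externally (t = TAI64N timestamp),
# rotates at ~1 MB (s1000000), and keeps 20 old files (n20) in ./main.
# Running it under a dedicated account via setuidgid is what keeps the
# service itself from modifying its own logs.
exec setuidgid logacct multilog t s1000000 n20 ./main
```

This is a config-style run script; the binaries (`setuidgid`, `multilog`) ship with daemontools.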

Indexing - don't need it. No one does. That's an external problem solved by syslog collectors like Splunk. As someone who deals with up to 500 GB of logs a day, I know my shit. Systemd doesn't solve a thing here; in fact it adds overhead to a solved problem.

> And here comes the mythical "coupling is bad" statement while completely ignoring the problem. No, I don't want my NFS daemon to be started before my network connection is up, thank you very much.

Well what the hell does your NFS daemon do when your network goes down, your adapter gets hotplugged after a failure or someone trips over the cable? There is so little of the problem solved by systemd it's unbelievable. The RIGHT solution is to make the processes resilient to failure conditions like this.
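"Resilient" here usually means retry-with-backoff instead of assuming the network was up at start time. A minimal sketch (the probe and host name are placeholders, not a real NFS health check):

```shell
#!/bin/sh
# Hypothetical resilience helper: keep retrying the network operation
# until it succeeds, backing off up to 30s between attempts.

try_connect() {
    # Placeholder for the real network operation, e.g. reaching the
    # NFS server; here we just probe the host.
    ping -c 1 -W 1 "$1" >/dev/null 2>&1
}

wait_for() {
    host=$1
    delay=1
    until try_connect "$host"; do
        echo "waiting for $host (retry in ${delay}s)" >&2
        sleep "$delay"
        if [ "$delay" -lt 30 ]; then delay=$((delay * 2)); fi
    done
}

# A run script built on this would do: wait_for nfs-server && exec mydaemon
```

The same loop handles both the boot-ordering case and the cable-unplugged-at-runtime case, which is the point being made.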


> It guarantees that the logs are stamped externally, are committed to disk and that the logs haven't been tampered with by isolating them from other processes and user accounts that can modify or write to them. Don't need journaling on top, just separation of concerns.

What if you want to consolidate all your logs in a single place instead of scattered over many files? What if you want the logs to be shipped to a remote host without storing them locally?

If your answer is syslog: welcome back to the problem.

> Well what the hell does your NFS daemon do when your network goes down, your adapter gets hotplugged after a failure or someone falls over the cable? There is so little of the problem solved by systemd it's unbelievable. The RIGHT solution is to make the processes resilient to failure conditions like this.

That has got nothing to do with the init system. Just because the services themselves can recover from dependencies that temporarily go down doesn't mean that I want to see tons of useless error messages during startup. I want my NFS daemon to start after the network is up, so that it doesn't bother me with useless "network is down" messages. Recoverability is a completely independent (though desirable) property.
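For reference, this is the documented systemd pattern for that ordering: `Wants=` pulls in the target, `After=` delays startup until it is reached. Paths and names here are hypothetical:

```ini
# Hypothetical /etc/systemd/system/my-nfs-client.service
[Unit]
Description=Example service that should only start once the network is up
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/sbin/mydaemon
```

Declaring the dependency once in the unit replaces the per-service probe-and-retry scripting, which is the "duplicated logic" being argued about upthread.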


It's not different: both can and must be solved by network daemons listening for network change events. So solving reliability solves the startup issue.


> Indexing - don't need it. No one does. That's an external problem solved by syslog collectors like Splunk. As someone who deals with up to 500Gb of logs a day, I know my shit. Systemd doesn't solve a thing here. In fact it adds overhead to a solved problem.

Does systemd's journal still use a binary format? That is a no-go for me. Inevitably it will get corrupted, and then you rely on the journal reader being able to recover your logs, or fix bugs in it until it does.

I'd be more willing to consider systemd as an alternative if it kept using a human-readable log format, that can be read with cat/tail in a worst-case scenario. If it wants to index it, it can keep an index on the side, no big deal if that gets corrupted/out of date, as it can always be rebuilt.


Yep. It still uses a binary format. An ugly fucker of one too. You can have it pipe to a real syslogd, but at that point, I'd rather just have it send to one and get that ugly monstrosity that is systemd out of the way.


I don't buy this argument. SQLite and PostgreSQL's databases are binary too, but that's ok?


A database can afford to fsync to ensure consistency, and databases have been heavily tested to ensure that works properly. And even then it's still possible that the file becomes corrupt, if the OS or the disk lies about fsync: https://www.sqlite.org/howtocorrupt.html
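The durability trick being described is "force the write to stable storage before considering it complete". Shell can only approximate the fsync(2) a database would use with the coarse sync(1) command, but the shape is this (helper name hypothetical):

```shell
#!/bin/sh
# Hedged sketch of a durable append: write the line, then flush to disk
# before returning, so a crash immediately afterwards can't lose it.

append_log() {
    file=$1
    line=$2
    printf '%s\n' "$line" >> "$file"
    # Crude: sync flushes all dirty buffers system-wide; a database
    # would fsync() just its own file descriptor instead.
    sync
}
```

Even this only helps if the OS and disk honor the flush, which is the caveat the SQLite link above spells out.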

Has journald been tested on how well it copes with sudden reboots, kernel panics, powerloss, etc.?

When something goes wrong you usually want to be able to still read your logs to figure out what happened, and you may not even be able to boot the system properly.


> Has journald been tested on how well it copes with sudden reboots, kernel panics, powerloss, etc.?

Are you assuming it hasn't been tested in these situations just because you don't know?


Databases have been around a lot longer than systemd, so I assume they are better tested than journald in this regard. I don't use systemd - because Debian doesn't use it (yet) - so it was rather a question for those who do use systemd.

I'm not saying that I'd be happy to have a database as journald backend, I'd be just less concerned.


OK, I see. Thanks for the clarification.

systemd's test suite[1] doesn't seem to cover those cases anyway.

[1]: http://cgit.freedesktop.org/systemd/systemd/tree/test


I think it has not been tested in these situations enough. It's not in production anywhere major yet.

The assertion is valid IMHO.


I wouldn't put my system logs into either of those.


> Well what the hell does your NFS daemon do when your network goes down, your adapter gets hotplugged after a failure or someone falls over the cable?

Let's say it crashes or hangs. In that situation, systemd will notice it, clean up, notice that network is not up, wait until it is, and restart it.

Half of what makes systemd so great is that it allows the daemons of the system to be much less robust than it is: it does the right thing so that the entire system remains robust.
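The notice-clean-up-restart behavior described above is a couple of documented `[Service]` directives (daemon path hypothetical):

```ini
# Hypothetical fragment: let systemd supervise the daemon.
[Service]
ExecStart=/usr/sbin/mydaemon
# If the process crashes or exits non-zero, reap it and start it again
# after a short pause, instead of leaving the service down.
Restart=on-failure
RestartSec=2
```

Combined with `After=`/`Wants=` ordering on the unit, this is what lets a fragile daemon behave robustly from the outside.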



