
It's interesting to think about the way these things evolved. Imagine if, over time, the `compress` utility got -gz, -bz2, -lzma, etc. flags, and `gzip`/`bzip2` were all converted to deprecated shell scripts. When is the right time to consolidate variants under one program (and deprecate the variants), and when is it better to let small utilities keep doing one thing well(TM)?
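For what it's worth, archiving did evolve in roughly this direction: modern `tar` selects a compressor with a flag rather than requiring a separate front-end per format. A quick sketch (flags as in GNU/BSD tar; file names made up, and `-J` assumes xz is installed):

```shell
# One tool, compressor chosen by flag -- the consolidated model.
mkdir -p demo && echo "hello" > demo/file.txt
tar -czf demo.tar.gz demo/   # gzip via -z
tar -cJf demo.tar.xz demo/   # xz via -J
ls demo.tar.gz demo.tar.xz
```

The standalone `gzip`/`xz` binaries still exist underneath, which is arguably the plugin model in disguise.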

I see people talking about removing the compatibility scripts being driven by a sense of purity, but wasn't it that same sense of purity driving someone to collapse `fgrep` and `egrep` into `grep` in the first place? The sense that these are all just variants of the same goal, and thus should be flags to the same program? Why bother combining them if not to ultimately remove the "extra" programs one day?

I'm not sure what the right answer is. On one hand, I like the idea of a smaller namespace of programs with a larger set of well-documented switches. The alternate universe where `compress` covers the widely used compression variants, and the variant utilities fell away over time, sounds kind of nice. Or imagine if early UNIX had arrived at a "plugin" model, where a top-level program like grep has a pluggable regex engine that independent projects can provide. The culture we have, of tiny independent projects, will always make consolidation and deprecation messy events.



The original UNIX design philosophy was very much one command for one thing.

=> http://harmful.cat-v.org/cat-v/ UNIX Style, or cat -v Considered Harmful

But that argument cuts against pretty much all command-line flags. Rob Pike argued that `ls` shouldn't even have an option to split its output into columns; you should pipe `ls` into a column-splitting program instead.
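You can approximate Pike's preferred composition today with the `column` utility (from util-linux/BSD; availability varies by system):

```shell
# ls emits one name per line; turning lines into columns is a
# separate program's job, not an ls flag.
ls -1 /usr | column
# The same composition works for any line-oriented source:
printf '%s\n' alpha beta gamma delta | column
```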

So the pure version of that philosophy is long gone. That makes the current effort seem rather arbitrary and meaningless.


As someone who learned about regular expressions before using grep extensively, I found grep quite unintuitive, since extended grep matches what I am conceptually thinking about from a "theory" point of view. One difference between grep and compression is that grep is bounded by regular languages, while compression is more free-form. It's therefore conceivable to view grep as a mature enough technology to be finished once and for all, whereas compression will keep branching into many disparate algorithms.
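Concretely, the mismatch is that basic REs treat the textbook operators `|` and `+` as literals unless backslashed (and even then, `\|`/`\+` are GNU extensions), while `grep -E` accepts them as written:

```shell
# ERE: alternation and + work exactly as regex theory writes them.
printf 'cat\ndog\ncot\n' | grep -E 'ca+t|dog'   # matches cat, dog
# BRE equivalent needs backslashes (GNU grep extension):
printf 'cat\ndog\ncot\n' | grep 'ca\+t\|dog'
```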


Consider the popularity of ffmpeg and imagemagick. Putting the mainstream algorithms together under one interface seems to be an appealing model to a lot of people, even though new algorithms are constantly being developed in those areas.

Personally, for audio encoding, I'm much happier just installing ffmpeg than I would be gathering and learning a bunch of flac/ogg/opus/mp3/etc encoders.


Yes, those are quite different. But you also can't pipe lossy-compressed data around willy-nilly without compounding generation loss, so it's hard to imagine an alternative built from unix-style tools.

The difference may also stem from those being application-focused tools rather than part of the core command-line toolset, so they don't carry the burden of maintaining legacy interfaces forever.



