OpenBSD from a veteran Linux user perspective (cfenollosa.com)
84 points by mulander on June 28, 2015 | hide | past | favorite | 60 comments


Thanks for taking the time to write this up.

Got a few CentOS 6.x boxes under my command at the moment. At the same time I've got a NetBSD/OpenBSD background. I ran NetBSD for years on Sun SPARC kit and thoroughly enjoyed it. This eventually rolled onto OpenBSD because TBH it just works (to a point) and everything is easy to find.

However I end up with CentOS every time when it comes to rolling out something professionally in production. Why is this?

1. The amount of information on how to solve even the most complicated problems is a Google away every time. Sure I solve most problems from my head but when I've got a CIFS mount that dumps stack, there's an answer there in 30 seconds.

2. I can just leave CentOS to it for a decade and yum update it as required. No PITA world changers every 6 months as a new release drops.

3. The OS and the packages are considered as one singular concept. I wasn't a fan of this idea initially but the fact you can drag your kernel and any part of your userspace up from the same source is really cool. There's only one update mechanism to consider. This is stupidly convenient when you have Ansible in the picture for example.

4. IO perf, particularly on SSDs, is 2-3x better on Linux on the same kit (HP DL380 Gen8, Samsung 845 DC PRO).

As a side issue, both are crap on the desktop so I'm sitting here on Windows 8.1...


> As a side issue, both are crap on the desktop so I'm sitting here on Windows 8.1...

Which just goes to show how we all are different. Windows 8.1 was what pushed me to move my main laptop (also used by wife, etc) to Ubuntu.

It has worked great, and just yesterday I discovered Linux automatically handles (SANE) scanners network-transparently via saned. I had no idea. Connect scanner to server and start scanning applications (including scripts) on the laptop. It just works. With zero configuration. Try that on Windows!

I'm literally finding Linux on the desktop the greatest thing ever these days.


Linux on the desktop isn't terrible until you hit an edge case to be honest. Typically for me, it's printers and power management.

I have a wireless scanner and printer combo (HP 2450). To set it up on Windows, I turned it on, pressed the WiFi button and the WPS button on my router, then File -> Print and/or open up Fax and Scan, and that's it. Just works. No setup for scanning or printing. On Linux: 30 minutes arguing with hplip, the output looks like ass whatever switch you flip, and SANE doesn't even see it.

Then there's PM. On my 9-cell Lenovo X201, 8.5 hours on windows 8.1. I managed to nab max 5 hours out of every Linux distro I tried (Ubuntu, Debian, CentOS) with powertop tuning. The cruel irony is that CentOS gets better battery life in a VM in Windows than it does on the bare metal.

YMMV as they say but I really can't be arsed with anything that gets in the way of doing stuff these days. Tuning a Linux distro was fun about 10 years ago for me. Not any more.


Funny thing, I've had the opposite experience with Windows 8.1 vs. Ubuntu.

Printing to my 6 year old Ricoh color laser proved a difficult task, tangling with driver hell. It seemed unnecessary given the printer is equipped with PCL and Postscript emulations, but Windows didn't care about those standards. The only way it would work was with the crappy Ricoh driver, forcing the sacrifice of some basic functions.

OTOH under Linux, gutenprint drivers worked even without altering the default settings. Maybe it was easier because I was more familiar with the CUPS setup; nonetheless, the difference was noticeable.

I'd concede that newer printers might be easier to configure for Windows vs. Linux or other OS, but it's troublesome that perfectly good equipment becomes "obsolete" when a few years old. In that respect Windows can be a disadvantage.


You are right. Windows is better with newer hardware, Linux is better with older hardware.

I've had nothing but trouble when trying to install Ubuntu on brand-new laptops, but the exact same laptops run Ubuntu perfectly well a couple of years later.


Consumer devices are just targeted for Windows. That is the explanation. I had similar nightmares getting a mainstream brand consumer printer/scanner (Epson I think, but not sure) working wirelessly with Mac OS X. Never succeeded. It's a result of decisions by the manufacturer, not any lack of capability in Linux.

In MANY other ways I find Windows to be a constant, not just edge-case, constraint on my productivity. So I don't use it. To each his own of course.


Disagree. The HP 2540 works fine with the wife's MBP on 10.10. It just appears and you print to it. Same with windows. Same with iOS/AirPrint as well - it just works.

They're $70 in the US / £30 here for an all-in-one scanner, printer, copier combo so this is rock bottom cheap ass hardware and it works flawlessly.


>power management

Are you running TLP? It's easily the best power management tool out there and it's a big reason I'm not switching to a BSD.


I've generally only run thinkpads, but I find OpenBSD's ACPI to be measurably better than everything else on power management.


I run Gentoo on my Dell Inspiron 5520 and it took only a minimal amount of configuring (holy moly!). The most difficult part was audio and touchpad drivers, which took my google-fu to another level. Otherwise, wow is Gentoo fast if you configure it correctly!


Last year I bought an Asus N550JV with Win 8.1 and after a few months started having problems with wifi, the keyboard, Bluetooth and a general slowdown. Everything I tried only yielded minor results... So at one point I decided to try Linux for the first time. I installed Ubuntu 14 and voila! Half my problems were gone, and over the next few months I was able to fix most of the other problems as well. On the other hand, I still haven't been able to do anything meaningful with Windows.


I don't see why so many people have so many problems with Windows. I do a clean install on just about anything and it pretty much always just works, and keeps doing so for months.

Stay away from anything Broadcom, AMD, Radeon and pick a decent SSD (Samsung 840 pro here) it's bomb proof.

The only playing around you have to do is on hardware that is way newer than the windows version and the network interfaces aren't supported.


So what you're saying is... Windows works fine if you shop hardware from known good vendors and nothing too bleeding edge? That sure sounds like what people said about Linux a decade ago. ;)


Actually it works on anything out of the box but you might have to source some drivers for network cards so you can get to windows update for everything else.

The best bit is on my older X201, it installs all the Lenovo official drivers as part of windows update. You install windows, wait about 15 minutes, then reboot when it tells you to and bam, sorted.

That doesn't sound like Linux a decade ago ;)


It does, actually; package managers in Linux have been updating all of your drivers along with your programs for more than a decade now...


Is there any Linux or BSD that is good* on the desktop for you? Just wondering!

PS: There appear to be a number of CentOS boxes that have been left for more than a decade judging by the thread on CentOS Forums about patching a CentOS 4 server for Heartbleed...


I can recommend Fedora and Arch... but "good" is a pretty individual thing. I measure by ease of maintenance and breadth of repository. Some people measure by WM/DE which is strange to me because you can replace that in about 30 seconds with one command.


Xubuntu is close to perfect for me: fast, stable, has everything I want, and they don't stop the boat mid-ocean to redesign it.

Used it for years and have basically no complaints.


Just wish Gnome would lighten up on their GTK chokehold.


Agreed, I wish GTK was isolated from Gnome like Qt is from KDE but I can't see it happening.


Ubuntu pre-Unity was pretty good.

Yeah, we have two CentOS 4 boxes. They aren't connected to the internet, just an old fashioned DUP to receive and send text from a client's mainframe, wrap it in a SOAP request, and post it to a host on an internal LAN. One of them had an uptime of 6 years until our UPS blew up.


Have you tried recently? The first few versions of Unity were awful - but I tried again (Trusty, 14.04), and whoa - realized I like it better than the Mac and Gnome2 I'm still using occasionally.

It took a while to take off, and I still meet the occasional bug - but it's extremely usable right now. YMMV.


Yeah, once you turn off the internet search crap, Ubuntu is pretty comfortably the least terrible desktop Linux distro.

I simply don't have the time these days to configure X and everything from scratch, and Ubuntu mostly manages to take care of that for me. Then it gives you a desktop environment which looks a lot like my OS X setup, and is perfectly fine at doing basic things.


Thanks for the heads up - I haven't tried since 13.04. Will have a bash at it in a VM today.

I've not got on with OSX since 10.5 TBH so I'm with you there plus I'm allergic to MBPs for some reason; end up with bad rash on palms.


I use OpenBSD on the desktop and am perfectly happy with it. Does everything I need it to do. That does not mean it's right for everybody, of course.


Hi, OP here.

This is actually the second revision of the text; I got some awesome feedback from other OpenBSD users and tried to improve it. I'll be happy to hear your opinion and fix any errors that may still be in the text.

This is my first time with a BSD and its idiosyncrasies. The idea is to create a guide for former "GNU userland" admins and help them jump to BSD or, at least, have a more informed opinion before making the jump. The post will be further updated since I've been receiving more emails :)


Completely off topic, but I love your cookie notification.

"This website uses third party cookies exclusively to collect analytics data. If you continue browsing or close this notice, you will accept their use. The EU now requires all sites to display this banner which confuses users and does nothing, actually, to improve your privacy."


Yeah, it was quite sad to read on HN how non-EU users complain about getting the same notification which has no legal (nor technical) effect on their computers.

That's the price of having completely tech-ignorant legislators, I guess. I also took some time writing a small piece of JavaScript to handle it, in case you're in the EU and need to wait for user confirmation before setting cookies: https://github.com/cfenollosa/eu-cookie-law


I use FreeBSD instead of OpenBSD, but for your next revision, I'd recommend symlinks for your mount problems with build directories. Before I switched over to ZFS, I would always just create a /home/ports and symlink /usr/ports over to it, same with /usr/src and /usr/obj. Bit ugly, but it works.
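The trick looks something like this (demonstrated here in a scratch directory so it's safe to run anywhere; on a real box you'd create /home/ports as root and link /usr/ports to it):

```shell
# Scratch-directory demo of the symlink approach; on the real system
# the targets would be /home/ports, /usr/ports, etc.
demo=$(mktemp -d)
mkdir -p "$demo/home/ports" "$demo/usr"
ln -s "$demo/home/ports" "$demo/usr/ports"   # i.e. ln -s /home/ports /usr/ports
ls -ld "$demo/usr/ports"
```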


This is a good suggestion. OP could also mount /usr/ports, /usr/obj, etc. via NFS from a dedicated build machine, and just pkg_add the built packages from /usr/ports/packages/, or 'make install' precompiled errata from /usr/src/. Having a dedicated build machine also keeps all the build dependency stuff contained elsewhere, and makes it easy to update multiple machines.
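On the client side the NFS route can be as simple as a couple of fstab entries (the hostname and export paths here are made up, so adjust for your setup):

```
# /etc/fstab on the client; "buildbox" is a hypothetical build host
buildbox:/usr/ports /usr/ports nfs ro,nodev,nosuid 0 0
buildbox:/usr/src   /usr/src   nfs ro,nodev,nosuid 0 0
```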


Thanks for the suggestion. I might end up doing that, since I can't compile some of the biggest ports (php-5.6 right now).

This seems to be a common newbie problem. Expert users change the default partitioning on install, knowing they may need to compile some big port, but I think 2GB isn't a safe default for the people who will be using the defaults precisely because they don't know any better.


I agree with your point about the default partition layout providing too little space for the src trees.

I ended up editing the partitions on the second install.


I don't know why this has not been fixed. It's been the case for a while that the default /usr partition is too small if you intend to do kernel and userland compiles and keep up with errata in the stable ports tree. Maybe it's that there is an assumption that most users will not do this (they do discourage it, but it's the only way to apply errata). I try to remember to change the default layout on a new install; another option is to symlink /usr/src to a directory on a larger partition.


I started a discussion on the mailing lists suggesting a default /usr size of 5GB; maybe you can contribute :)


The differences between GNU and UNIX behavior can be substantial. I used a SunOS system for a few terms in college; I used Debian to get work done, but for a few things we had to make sure our code compiled and ran on the SunOS system.

One of the first things I noticed in the short while before I put the GNU tools at the front of my path: the SunOS tools wanted all options before all other arguments. So if you have "ls something", and you hit up, space, -l, enter, (or "!! -l" if you prefer) then instead of the long listing you expected, you get the same short listing as before, along with an error like "-l, no such file or directory".
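Incidentally, you can approximate that SunOS behaviour on a GNU box, since glibc's getopt normally permutes arguments but stops at the first non-option when POSIXLY_CORRECT is set (a rough stand-in, not the real SunOS tools):

```shell
touch something
ls something -l                    # GNU ls reorders args: long listing of "something"
POSIXLY_CORRECT=1 ls something -l  # strict ordering: "-l" is taken as a
                                   # filename and ls complains about it
```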

It's minor, but it's one of those things that adds up when you're used to more capable tools and find yourself in a less capable environment.

OpenBSD doesn't necessarily suffer from the same deficiencies (I certainly don't know if they have that one or not), but when you're used to coreutils, any other tools can be a shock, and not typically in a good way. The same goes for environments like busybox, but at least there it's for a good reason: size constraints.

I'd be curious to hear examples where the reverse is true: are there instances of the standard command-line tools available on other UNIXen being substantially better than the GNU userspace tools?


That depends on how you define "substantially" and "better". Consider cat.c from gnu [0] and openbsd [1]. I'd say that openbsd's implementation is "substantially better", because:

1. it does everything I need in 194 LOC (vs 488 LOC)

2. the code is so understandable that it needs no commenting (and I'm not just saying that because it has no comments, it is braindead simple code)

3. there are no ifdefs and the program state is much more shallow

Now obviously that is from the perspective of a programmer, but it has been my experience that if a code base sucks for programmers, then things suck for the users as well (surprising behavior, security issues, long lags in bug fixes, etc). I once spent a day chasing a bug related to autotools and colorized output from grep... sometimes the bells and whistles get in the way of actually getting things done.

[0] http://git.savannah.gnu.org/cgit/coreutils.git/plain/src/cat...

[1] http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/bin/cat/cat.c?r...


While I appreciate the value of a cleaner or simpler codebase, I can't place as much value on that as I do on usability, unless the code is utterly unmaintainable beyond all hope of redemption. And whatever you might think of the extra complexity of the GNU code, for good or ill, it certainly isn't that.


Again, a lot of words with definitions that intelligent people can disagree on. This is why there are so many different operating systems... and implementations of echo.c. So to answer your original question more directly : yes, there are many instances of the standard command-line tools available on other UNIXen being substantially better than the GNU userspace tools. In my opinion. You may disagree, but who is to say?


bsdtar's really nice. It's more a tar-flavoured wrapper around libarchive, and thus has support for many formats:

Read: tar, pax, cpio, zip, xar, lha, ar, cab, mtree, rar, and ISO images.

Write: tar, pax, cpio, zip, xar, ar, ISO, mtree, and shar.

I believe it had tar compression detection significantly earlier than GNU tar too.
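Modern GNU tar has since caught up on read-side detection, so this is easy to demo portably (bsdtar accepts the same invocation):

```shell
echo hello > file.txt
tar -czf archive.tgz file.txt   # write gzip-compressed
rm file.txt
tar -xf archive.tgz             # no -z needed: compression is auto-detected
cat file.txt                    # -> hello
```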


Interesting example, thanks!

There are generic tools like "unar", but it's interesting that in the BSD world tar grew those features instead. An odd counterpoint to the accusations of "bloat" often leveled at GNU.


The other interpretation is that instead of reinventing the wheel purely for tar, it's all been factored out into a standardised library anything can use for generic archiving tasks.

e.g. instead of:

    440K gtar
    151K gcpio
    144K unzip
Base has:

    655K libarchive.so.6
    57K  bsdtar
    37K  bsdcpio
    20K  unzip
34K bigger, much wider format support, and directly usable by other tools like pkg and ar, as well as third party apps like tarsnap and cmake.


ifconfig works in the BSDs and can control all kinds of interfaces, rather than being a broken deprecated tool. Obviously that is not entirely comparable as there is ip.

Generally I don't notice much difference; you are comparing to old Unixes like SunOS that had very limited commands. AIX is like that too.


Also, most if not all the gnu variants of common utilities are in ports if you need them.


I have to admit to never having installed any extra GNU stuff except gmake in a BSD, everything else just works...


> The base system config files are properly centralized in /etc, but not the ports.

OpenBSD generally stores ports configuration in /etc as well; however, some software has unique directory layout requirements and/or may be chrooted in /var, in which case it needs access to its config files there.


> For example, ext4 is officially supported read-only but in my case it didn't read some folders properly.

For some reason, I had thought until now that ext4 was "pervasive" or "fundamental". I assumed it to be readable by most systems, so it came as a surprise that OpenBSD could not correctly read an ext4 filesystem. But thinking again, last time I checked, Linux could not write to an HFS+ filesystem either, and OpenBSD's FFS is likely not supported by Linux at all. So a BSD not fully supporting a Linux filesystem is quite natural.

Probably the "greatest common divisor" filesystem, supported by all major operating systems, is FAT32. Which is a shame, as it is neither an open standard nor a thing from the Unix culture. It also lacks journaling support, which I consider essential. Any alternatives?


FAT32 is an open standard AFAIK; however, its extensions (e.g. long file names) and forks (e.g. exFAT) are patented. But in fairness, FAT32 isn't much use without support for long file names.

Interestingly, some FAT32 forks do support journalling. Sadly those tend to be the patents Microsoft is most proactive in upholding.

As for an alternative, ZFS is supported on FreeBSD, Linux, Solaris and OS X, so that's one option, albeit not a great one by this specific criterion. ext2 receives pretty good support as well, even on Windows; however, ext2 doesn't support journalling. Another option is to run ext3, as that gracefully downgrades to ext2 if no ext3 driver is available. Sadly, NTFS is probably the most ubiquitous journalling file system; ntfs-3g, which is actually a pretty decent driver, has been ported to quite a few platforms.


The FAT32 long file name patents are expired. That's a big part of why exFAT exists.


UDF. It was made for optical media, but works just as well on hard drives. About the only caveat I've found is that you have to be pretty particular regarding what format you choose, but I've found that Windows' format for UDF tends to make the most compatible disks.


2015 era command line Rosetta Stone, including OpenBSD and current Debian:

https://certsimple.com/rosetta-stone

Also: SmartOS and FreeBSD.


It's kinda strange that he spends so much time documenting his old-school linux admin rep, then really complains about building from source and not being able to just apt-get upgrade. Building from source is the way we used to do it.


But one does get used so quickly to the nice things... :) That's the main reason that put me off Slackware and Gentoo. Package managers are a great improvement.

Think about it: we also used to spend a lot of time configuring drivers, and I think we all agree that hardware autodetection is something to be expected in 2015, isn't it? That was my thought about compiling from source. Not that there is anything wrong with it; I just found it very weird because I didn't know that was the usual way to go on BSDs.


That depends on the BSD. FreeBSD does have binary repositories and a pretty decent package manager (pkgng). PC-BSD is basically a distro of FreeBSD, so it has the same; as does DragonflyBSD, which is a much older fork but these days a standalone BSD in its own right.

NetBSD still uses pkgsrc, which does support binary packages, but the command line tools are a little less intuitive in my opinion. Incidentally, a few other UNIXes also support pkgsrc.

You may already be aware of this, but it's worth noting that the different BSDs are more like separate OSes, rather than distributions of the same OS as you see with Linux. E.g. DragonflyBSD, despite being a fork of FreeBSD, has some quite significant kernel changes (different schedulers, file systems, etc). And it's a similar story with NetBSD and OpenBSD too, though they're forked from other, much older, BSDs.


That is the nice thing about having a ports tree. On FreeBSD I've got a jailed instance of poudriere that automatically builds binaries of the ports I'm interested in, with the compile flags I want. Having a local build server is pretty awesome and makes a lot of things that would otherwise be difficult very easy. For example, samba4 needs some compile options flipped on in bind that are not on by default in the binary package (bind isn't the default DNS server in samba4). Just add the needed flags to the options directory in poudriere, and now everybody who cares to grab a binary package from the build server can have it, and it is always up to date. Of course the same thing can be done from the ports tree itself, per machine, but a build server starts to make sense after about five machines.
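A rough sketch of that flow, from memory (jail and port names here are examples, so check poudriere(8) before copying):

```
# toggle the compile options for the port you care about
poudriere options dns/bind99
# build it (and its dependents) in a jail
poudriere bulk -j buildjail dns/bind99
# clients then point pkg(8) at the repository poudriere produced
```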

This is actually what got me to switch from Linux to FreeBSD (though had I stuck around, systemd would have caused a ragequit), after I started running into problems with how hairy my ~/bin and ~/lib directories were getting. Hopefully there is a nice Linux solution that I just overlooked at the time, but I'd be surprised if the awesomeness of /usr/ports is approached.


If you follow -current it's pretty easy. Upgrade from a snapshot, run sysmerge(8), and then update any packages using pkg_add -u. Just be sure PKG_PATH points to the same snapshot. The risk with following -current is that things are sometimes broken. I've only experienced it rarely, and it's usually fixed quickly, but it can happen. Also it's a sort of "don't look back" decision. The only supported way to go from -current back to -stable is to reinstall.
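In other words, the -current routine boils down to something like this (run as root; the mirror path is an example, so adjust for your mirror and arch):

```
# after booting the new snapshot's bsd.rd and upgrading:
sysmerge                  # merge any config file changes
export PKG_PATH=ftp://ftp.openbsd.org/pub/OpenBSD/snapshots/packages/amd64/
pkg_add -u                # update packages against the same snapshot
```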

Following -stable and keeping up with security errata is where you run into needing to compile from source. It would be nice if that were easier.


Building from source was tedious back then and it still is today.


> In a few years we've gone from /etc/init.d/sshd restart to service sshd restart to systemctl start sshd.

Aw man, I just got used to the second one!


OpenBSD grows and changes, and its rc.d(8) is no exception!

man 8 rcctl
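E.g., with sshd as the example service (rcctl is new in OpenBSD 5.7, so this assumes a current system):

```
rcctl enable sshd    # start at boot, instead of hand-editing rc.conf.local
rcctl restart sshd   # the runtime equivalent of "service sshd restart"
```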


"But this time I didn't want to use a Linux installation which wants me to reboot every 5 days because of some critical patch. I'm looking at you, Ubuntu."

Critical patch... I think you mean kernel update, and you don't have to perform them. A critical kernel patch that required you to restart every 5 days would be ridiculous.


For anyone who reads this and is put off by the thought of having to update their system from source; there are binary patches available (both OS and packages) using the `openup' utility from https://stable.mtier.org/



