There are a lot of comments in here about desktops, but IMO why even discuss Linux on the desktop… 99.9999% of Linux deployments are not Arch installs on old Thinkpads. Immutable distros *are* becoming a de facto standard for server deployments, IoT devices, etc. They improve security, enable easy rollbacks, and give systems/hardware developers a single non-moving target to validate against…
There’s also been a ton of very advanced development in the space. You can now take bootable containers and use them to reimage machines and perform upgrades. Extend your operating system using a Dockerfile as you would your app images: https://github.com/containers/bootc
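E.g., a rough sketch of the flow (the image names and package picks here are mine, just placeholders, not anything bootc prescribes):

    # Containerfile: derive a bootable OS image from a bootc base image
    cat > Containerfile <<'EOF'
    FROM quay.io/fedora/fedora-bootc:40
    RUN dnf -y install tmux htop && dnf clean all
    EOF

    # build and push it like any app image...
    podman build -t registry.example.com/custom-os:latest .
    podman push registry.example.com/custom-os:latest

    # ...then point an existing host at it; the new image takes effect on reboot
    sudo bootc switch registry.example.com/custom-os:latest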
After all the nixpkgs/nix leadership failures, and after having dozens of hours of packaging-improvement work ignored by individual package maintainers and entire SIGs, I've been evaluating bootc and other declarative options, and I'm quite disappointed that they've been completely uninterested in providing any declarative solution for per-host "state" with bootc systems. Having to set up a boot Dockerfile (sorry daddy shadowman, I meant Containerfile!!!) and then also use Ansible and/or cloud-init on top of that to set up a new host is just a complete non-starter when NixOS can handle both in one language framework and development environment, even if that framework is heterodox jank that everyone outside of nixpkgs resents.
Plus 1 on this - I've probably had direct responsibility for managing fleets of roughly 50,000 Linux hosts, and I've never seen an immutable distro. We usually just burn a fresh image of whatever mainline Ubuntu is offering every week or two into the fleet. Saying that containers are becoming a de facto standard is reasonable, though - pretty much every company I've worked with, and my coworkers have worked with, has shifted everything into containers (at least in companies with x00k microservice instances running on ~100k-machine environments).
WSL2 + Ubuntu. It's either MacOS or Windows in BigCorp with all the Okta/Crowdstrike/... stuff they require - and I like the apt-get convenience.
Most of my time is in Tmux anyways. Over the last 15 years the client side has been one of MacOS, ArchLinux, Ubuntu, and now Windows/WSL2.
The real activity has shifted up to the orchestration and service discovery layers - nomad, k8s, consul, and whatever fleet-management/cluster-management layer maintains the hosts (a lot of homebuilt + terraform + chef/argo-workflow in our world). It's been years since I was really that concerned about the Linux host side of things - why care about "immutable" when the entire machine/image is ephemeral for < 1 week (or in some cases, < 1 day) anyway?
That's probably because, in practice, they're mostly used as the underlying OS for Kubernetes hosts (seeing as it's difficult by definition to configure an immutable install).
If you really think about it, what's the difference between spinning up a VM with a preconfigured image and spinning up a VM with an _immutable_ preconfigured image?
The difference is that one is immutable and the other is not. One can be rolled back to an earlier version while retaining user data; the other doesn't offer that ability.
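On image-based systems that rollback is a first-class operation. A sketch (exact commands vary by distro; these are the ostree/bootc-style ones):

    # ostree/bootc-style systems keep the previous deployment around:
    sudo rpm-ostree rollback    # Fedora Silverblue/CoreOS style
    # or, on bootc-based hosts:
    sudo bootc rollback
    # user data in /var is left alone across the swap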
Despite the name, that's not what immutable distros are for.
GKE won't yet let you restore a previous generation of the base image with its configuration and component versions.
Every professional programmer needs a desktop OS, and NixOS is really hard to beat. Switching to NixOS is like going from a car that is breaking down all the time to one that's reliable and easy to mod and repair. I don't recommend it to family members, but I do recommend it to programmers that care about their tools.
Of course there's many more Linux servers out there than there are programmers, but the OS the programmer uses to develop on is just as important as the OS they deploy to.
This assumes that things break all the time. I've been using Ubuntu/Debian for the last 20 years and have never had to re-install because something broke.
Nowadays Linux is very stable; you can have things that don't work properly, but you won't need a full re-install.
Every time I’ve tried to run a standard Linux distro like Ubuntu for more than a couple of years I inevitably end up breaking something in a way that I can’t recover.
I have had the same experience. Don’t run random commands from the internet, don’t install anything that doesn’t come from the distro vendor (a few very notable exceptions can be made for things like Docker if you really must), don’t mess with configuration files, do upgrades their way. Generally speaking you will have zero problems. Sometimes they will do something like switch from one network manager to something like netplan but overall that stuff is trending towards ease of use, not complexity.
If you install the newest versions of whatever from random repos or compile stuff yourself you are very likely to mess things up. But nowadays there is very little reason to do that. And you can pick a distro that releases at a pace you are comfortable with, so you have choices.
Don't use custom repos, use container technologies (e.g. Flatpak, Docker etc) to install applications, update the system regularly (at least once a week).
Usually broken distro upgrades I see are because people run "curl randomdomain.ck/totallysafescript.sh | sudo bash -" to install things or use custom repos.
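The boring-but-safe pattern looks like this (the app ID is just an example):

    # instead of curl | sudo bash, install from a sandboxed, updatable source
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.signal.Signal

    # and keep both the system and the apps on a schedule
    sudo apt update && sudo apt upgrade
    flatpak update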
I hate Flatpaks; they're bloated monstrosities and I only run them when I have no other choice. Outside of that, distribution package maintainers tend to do a good job and that is my preferred way of running programs.
Container stuff breaks the MOST for me. The hooks into the subsystems invariably don't work correctly, be it xdg preferences or finding things that are global. It's nice to package things into their own sandboxes, but those sandboxes have not played well with my wider systems. I am still thankful for snap getting me recent copies of popular software on my aged Debian installs, however.
This is why I like Arch's Pacman a lot, and the reason why I avoid Debian derivatives.
That `totallysafescript.sh` could at least be inside the package manager's scope. Most of the time someone has already done it and published it to the AUR.
IMO the reason there are so many people running random scripts on Ubuntu/Debian is how much more difficult/inconvenient it is to put together a dpkg .deb compared to a PKGBUILD file. Same for macOS, where you have to either rely on Homebrew wizardry or just run the script.
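A PKGBUILD is basically just a shell file with a few conventions, which is why it's so low-friction. A minimal sketch (hypothetical project, fields trimmed):

    # PKGBUILD - what the AUR wraps around "that script from the internet"
    pkgname=totallysafetool
    pkgver=1.0.0
    pkgrel=1
    arch=('x86_64')
    source=("https://example.com/$pkgname-$pkgver.tar.gz")
    sha256sums=('SKIP')    # a real package pins a checksum here

    package() {
      cd "$pkgname-$pkgver"
      install -Dm755 totallysafetool "$pkgdir/usr/bin/totallysafetool"
    }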
> That `totallysafescript.sh` could at least be inside the package manager's scope. Most of the time someone has already done it and published it to the AUR.
The AUR is still not as good as proper package management and shouldn't be considered a stable or reliable method of software distribution at scale.
This experience has been unique to (k)ubuntu (more than 15 years ago) for me.
I've been running rolling-release distros for a decade and never had any problems - you have to follow some software migrations when needed, but I managed to migrate to systemd on Arch without an issue, while every dist upgrade on Ubuntu was wrecking my system.
It's not a good distro. I don't know why people insist on using it. Notice that the GP said Debian instead. (Probably Stable, because testing and unstable will break within 10 years.)
Agreed. I confess I assume I'm living in an alternative reality whenever I read folks talk about how hard it is to run Linux as a main OS. I have broken things, sure. I can't remember the last time, though. I have had issues trying to get CUDA working correctly. But even that hasn't been an issue in a long time, at this point.
My gut says that if I were to try to get my 3-monitor setup such that the seams are all pixel-aligned, I would be in for a world of pain. I imagine that would be the same on other OSes, as well?
Very few people (among the world population) know what GNU/Linux is. Fewer care enough to switch to it. Even fewer know enough (or have the willpower, time and mental capacity to learn) to actually be proficient.
But among those who do, there are plenty of people who have learned Nix well enough it's no longer a weird arcane thingy that spews out incomprehensible errors for them. Although, I guess, among those no one will deny Nix can be better (but there are no multi-billion-dollar corporations spending tons of their resources on it).
It's like vim. First time you run it you probably can't even exit it - so, of course you think it's a disaster ;)
Can you elaborate? Why is it a disaster? I've only used Nix as a package manager when my work distro doesn't have some tools I wanted to install, but the few people I know that use NixOS seem to swear by it.
Debugging and error messages are still hard to deal with. Also, flakes should become standard at this point. Documentation on how to load and explore modules using nix repl is also lacking and/or frustrating. It definitely has rough edges. I do hope it will improve.
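For what it's worth, once you find the incantations it is explorable. Something like this works today (`:lf` loads the flake at that path; `myhost` is whatever your flake calls the machine, hypothetical here):

    $ nix repl
    nix-repl> :lf /etc/nixos
    nix-repl> nixosConfigurations.myhost.config.networking.hostName  # any option is reachable
    "myhost"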
For perspective, I’ve been running NixOS on my main workstation going back a few releases now.
When it works, it’s great. I like that I can install (and uninstall) much of the software I use declaratively, so I always have a “clean” base system that doesn’t accumulate numerous little packages at strange versions over time in the way that most workstations where software is installed more manually tend to do.
This is a trade-off, though. Much is made of the size of the NixOS package repository compared to other distros, but anecdotally I have run into more problems getting a recent version of a popular package installed on my NixOS workstation than I had in probably a decade of running Debian/Ubuntu flavoured distros.
If the version of the package you want isn’t available in the NixOS repo, it can be onerous to install it, because by its nature NixOS doesn’t follow some popular Linux conventions like the FHS. Typically, you write and maintain your own Nix package. That often ends up similar to fetching a known version of a package from a trusted source and then following the low-level build-from-source process, but all wrapped up in Nix incantations that may or may not be very well documented, and sometimes with a fair bit of detective work to figure out the versions and hashes of not just the package you want but all its dependencies, which may in turn need packaging similarly themselves if you’re unlucky.
You can also run into this when you’re not installing whole software applications (including ones that are in the NixOS package repository) but rather things like plug-ins for an application or libraries for a programming language. You might end up needing a custom package for the main application so that its plug-in architecture or build system can find the required dependencies in the expected places when you try to install the extra things. Again, this is all complexity and hassle that just doesn’t happen on mainstream Linux distros. If I install Python and then `pip install somepackage`, 99.9% of the time that just works everywhere else, but frequently it won’t work out of the box on NixOS.
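The workaround I usually reach for is letting Nix provide the Python environment instead of pip. A sketch (the package names are just examples that happen to be in nixpkgs):

    # ad hoc: drop into a shell where `import requests` just works
    nix-shell -p python3 python3Packages.requests

    # or pin it per project in shell.nix
    cat > shell.nix <<'EOF'
    with import <nixpkgs> {};
    mkShell {
      packages = [ (python3.withPackages (ps: [ ps.requests ])) ];
    }
    EOF
    nix-shell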
It’s one of those things that is actually perfectly reasonable given the trade-offs that are explicitly being made, yet still makes NixOS time-consuming and frustrating in a way that other systems simply aren’t when you do run into the limitations.
This comment is already way too long, so I’ll just mention as a footnote that NixOS also tries to reconcile two worlds, and not all Linux software is particularly nicely arranged to be managed declaratively. So in practice, you still end up with some things being done more traditionally/imperatively anyway, and then you have a hybrid system that compromises some of the main benefits of the declarative/immutable pattern. There are tools like Flakes and Home Manager that help to overcome some of this as well, and as others have said, they are promising steps in good directions, but we’re not yet realising the full potential of this declarative style and it’s hard to see how we get from here to there quickly.
> I liken Nix to source control in the time of CVS
I think this is my favourite comment about Nix ever. I'm not going to stop using Nix until a genuinely better alternative arrives, but that day can't come soon enough
My Linux desktop experience has been 2 years of Ubuntu/Debian, 4 years of Fedora, and 2 years of NixOS. Hands down, NixOS is my favorite. It's easy to recover from issues now that I've gotten the hang of the build error messages, and/or I can just reset my config to the last commit. It took me one year before jumping into flakes, and I'm glad I did. Next year, I'm going into Home Manager.
A custom GPT has been surprisingly helpful after feeding it the manuals for nix, nixpkgs, and NixOS, plus other Linux books.
I had the opposite experience because I want to run a lot of software in random repos.
I can make a nix-shell for each project, but then every nix upgrade forced me to go through a lengthy reinstall and sometimes wrecked compatibility.
Not to mention the number of derivations I had to write myself just to use the latest packages.
Using things like virtualenv instead of nix-shell can fix the general instability, but packaging is too big of a problem.
> because I want to run a lot of software in random repos.
Containers, and snapshots+clones are your friend. For a while I was doing ZFS snapshots and clones of Gentoo userlands.
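The snapshot/clone flow is pleasantly simple (dataset names made up):

    # freeze a known-good userland, then branch a writable copy to play in
    zfs snapshot rpool/gentoo@known-good
    zfs clone rpool/gentoo@known-good rpool/gentoo-scratch

    # if the experiment goes sideways, throw the clone away...
    zfs destroy rpool/gentoo-scratch
    # ...or roll the original back wholesale
    zfs rollback rpool/gentoo@known-good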
However, if you knew how bad things really are with glibc, how poorly designed Linux is to resist badly behaved software, and how easily some big players can inject badly behaved software into the channels you are fetching from, you would probably seriously consider Qubes.
illumos is a kernel you can rely on to run somewhat arbitrary software.
I personally know someone who runs an "endpoint as a service" with full MDM configurable via a web UI; the endpoints (the laptops) run on a Linux kernel.