
As much as I admire the people involved with Nix, who take the initiative to solve what they believe can be improved, and the other developers like her, toying with new things and successfully managing to figure all this stuff out... I'm actually disappointed.

The thing is, Nix is an absurdly complex piece of software, perhaps on a similar order of magnitude as Kubernetes or other such gizmos, requiring a very large time commitment to understand and to use properly.

Just for fun, have any of you tried to keep up during the Docker big bang, with all these technologies popping up left and right, switching names, dying abruptly, getting superseded or hacked through vulnerabilities? That was one of the most mentally draining years for me, and I've been in the field for a while now.

See... along the way, I've learned that Computer Science is always about tradeoffs. Now, when I'm being asked to trade away my sanity learning this nonsense in return for a couple of megabytes of disk space that would otherwise be wasted, I just don't see the value.

Seriously, are we optimizing for the right stuff here? I know Nix is doing package isolation with a programmable environment defined by text files, it's immutable and all. I get it. But we have alternatives for these problems. They do a pretty good job overall and we understand them.

Ironically, even those solutions I'm comfortable with are still struggling to get good adoption! How the heck is our field going to avoid fragmentation when it keeps growing exponentially like this?

Perhaps it's just me aging and getting groggy; though there must be an explanation for this phenomenon. Please, it definitely gnaws at me.



> Seriously, are we optimizing for the right stuff here? I know Nix is doing package isolation with a programmable environment defined by text files, it's immutable and all. I get it. But we have alternatives for these problems. They do a pretty good job overall and we understand them.

Setting up a development environment five years ago: here's a Word document that tells you which tools to install. Forget about updating the tools - good luck trying to get everyone to update their systems.

Setting up a development environment two years ago: here's a README that tells you which commands to run to build Docker images locally which will build and test your code. OS X users, have fun burning RAM to the unnecessary VM gods when you have little to spare to begin with, because MacBook Pros. No reproducibility between dev and CI despite using container images, because dev will have debugging tools made available in the image that are removed from the final image. Be limited while debugging locally, because all debugging must happen over a network connection.

Today: Install Nix. Run build.sh. Run test.sh. Run nix-shell if you want more freedom.
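
(For the unfamiliar, a minimal sketch of what backs that workflow: a shell.nix at the repo root, with placeholder package names standing in for whatever the project actually needs.)

    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      # every developer and CI runner gets the same tools, same versions
      buildInputs = [ pkgs.go pkgs.gnumake ];
    }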


> Today: Install Nix

I'm missing something here. (Disclaimer: I've just tried to understand the OP; I don't know all the details about what is going on there.)

The text starts with "In my last post about Nix, I didn’t see the light yet." ... Then it continues with "A popular way to model this is with a Dockerfile." So I expected that the post would demonstrate how Nix can be used without docker... But then later:

"so let’s make docker.nix:"

and continues in that way. So there is some dockering involved? To this casual reader, it doesn't seem that using Nix allows one to avoid docker as a technology, just that some additional goal is achieved by using Nix on top of it.

I just missed what was actually achieved, other than that the text mentions a few megabytes less here or there, out of 100 MB. I also miss the information on whether the megabytes were traded for build time or, if I understand correctly, even more dependencies (more places from which something has to be downloaded). Can anybody explain?

I'm sure that these tradeoffs were obvious to the author, but I as a reader hoped to somehow get the idea of those, and I missed that (yes, it's hard to "see the light" especially indirectly).


As a person that uses Nix to build Docker images in production, let me explain.

Docker is a tool that lets you package a Linux distribution and roll it out on any random server cleanly and safely. But the stuff inside that distribution is still very much bespoke and unmaintainable.

Nix is a tool that lets you create a custom Linux distribution with absolutely minimal effort. (Basically, just list your packages in a file and hit 'go'.) But the packaging story for Nix is pretty bad.

To bridge that gap, Nix has code that puts a Nix package with all dependencies into a Docker container. It works, but it's of course kind of icky; something more integrated and smart would be preferred.
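
Concretely, that bridge is the dockerTools set in nixpkgs. A minimal sketch (the package is just an example):

    { pkgs ? import <nixpkgs> {} }:
    pkgs.dockerTools.buildImage {
      name = "hello";
      # only the closure of pkgs.hello ends up in the image
      config.Cmd = [ "${pkgs.hello}/bin/hello" ];
    }

Build it with nix-build and pipe the resulting tarball into docker load.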


> Docker is a tool that lets you package a Linux distribution and roll it out on any random server cleanly and safely. But the stuff inside that distribution is still very much bespoke and unmaintainable.

This, a thousand times. If you are deploying the same container 10,000 times, with no modifications - well, it makes sense to spend time maintaining that distribution. But if you're using Docker for dev environments (for example) and each one is different... you haven't really moved on in terms of maintainability compared to using Vagrant or any VM setup.


There are two different use cases:

- development

- production

For development, using Nix or Guix is in my opinion extremely nice. Using Docker in development would mean mounting your dev directory as a volume into a dev-container and depending on your application, this might end up being a pain in the ass. Editors often depend on the same dependencies to compile your code on the go and check for errors - if you don't have these on the host, you will then have to start solving new problems.

With Nix, you can have a package definition and either install all the necessary dependencies globally on your machine or spawn a shell with all of the needed binaries in your PATH. Recently an application needed a different Node version than the one that was globally installed on my machine. Instead of having to build a Dockerfile or whatever, I just spawned a shell with the newer Node version, ran the command that I needed, and was done.
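
For the curious, that one-off shell is a single command; the attribute name for the pinned Node version varies across nixpkgs revisions, so treat this one as an example:

    $ nix-shell -p nodejs-12_x --run "npm install"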

For production, you might still want to use Docker, as there's a lot of great software built on top of it (Kubernetes and other platform-specific managed services). You can turn a Nix/Guix package definition into a Docker image quite easily, if you already have one. As an extra benefit, you remove the small chance of still ending up with incompatible dependencies that you get when using a traditional package manager in a Dockerfile.


Nix itself doesn't use Docker, and you can deploy it like that just fine (with NixOS, and NixOps if you have multiple machines).

But sometimes your ops team has standardized on Docker, maybe because you're using Kubernetes. In that case, Nix will happily build those images for you.


The big one that I got is that the resulting Nix image has just the Go executable in it, and so the server is safer because if anyone hacks into it they'd need to bring their own copy of any tools they wanted with them. I'm a huge fan of reducing attack surfaces wherever possible, and getting a container that will only run the program required and nothing else is a win for me.


> The big one that I got is that the resulting Nix image has just the Go executable in it

Now I miss the point of that one too: if just the Go executable alone is enough anyway, as it is statically linked, why not just copy it, instead of complicating things?


A lot of people have standardised on Docker images as the default distribution/packaging format as Kubernetes etc make deploying/running them more standardised across orgs.

You can build the binary and then have a Dockerfile copy it in on top of the scratch base image. However, if you are using nix for deterministic builds, you might as well add the few lines of code to have nix build the Docker image, versus maintaining a Dockerfile with a single copy command.
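
Those few lines might look roughly like this; a sketch assuming a Go module project, with hypothetical names:

    { pkgs ? import <nixpkgs> {} }:
    let
      app = pkgs.buildGoModule {
        pname = "myserver";                  # hypothetical
        version = "0.1.0";
        src = ./.;
        vendorSha256 = pkgs.lib.fakeSha256;  # the first build failure reports the real hash
      };
    in
    pkgs.dockerTools.buildLayeredImage {
      name = "myserver";
      config.Cmd = [ "${app}/bin/myserver" ];
    }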

If you are not building a static binary, you get the advantage that nix will copy in only the dependencies needed. You also get the advantage that when you are building images you're not randomly downloading URLs from the internet that may be dead. Artifacts can come from a nix build cache which is cryptographically signed based on the build inputs, so you know building the same image every time produces the same output.

With typical Dockerfiles that is not true. Docker images are not immutable, so fetching the same Docker image may result in a different image being fetched. Likewise, a lot of Dockerfiles just wget / yum install random packages from places that may not exist anymore. If you maintain your own nix build cache you will always be able to build, get a speedup from hitting the build cache versus compiling, and know the build is deterministic: running the same build multiple times will result in the exact same output.


Because you get to use the same tooling to build Docker images that need more than that. Depend on a C shared library via cgo somewhere? Have a directory full of templates and other resource files that need to ship with it? Maybe the Go program needs to shell out to something else? You don’t have to rework your tooling or hack a random shell script up.


That's a "why Docker?" question, rather than a "why Nix?" question. But it's a very good question, and one that I struggle to answer myself.


> Setting up a development environment five years ago: here's a Word document that tells you which tools to install.

If this is how you were setting up a development environment 5 years ago, anything would be better.


Nix does not set up my Visual Studio and XCode environments, Unity, Unreal, nor OEM SDKs.


Look up nix home-manager; you might be able to have nix configure everything in your home directory.

Also, nix makes it pretty trivial to package and patch proprietary binary vendor software. It would take at least one person on the development team feeling comfortable with nix, but they could craft a nix file that does everything.
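
For a taste, a fragment of a home-manager config; the options shown are standard home-manager ones, the values are made up:

    # ~/.config/nixpkgs/home.nix
    { pkgs, ... }: {
      home.packages = [ pkgs.ripgrep pkgs.jq ];
      programs.git = {
        enable = true;
        userName = "Jane Dev";            # hypothetical
        userEmail = "jane@example.com";   # hypothetical
      };
    }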


If you find something that can do all that reliably I would love to hear about it! I've got some Powershell scripts but it's still a several step process.


[flagged]


Sadly the supermarket around the corner doesn't take pull requests as payment.


[flagged]


Why should I be embarrassed for dumping some weekend coding garbage just to keep HR happy, given that everyone is expected to have something on GitHub?

Everyone here has known my stuff for a long time, or is knowledgeable enough to find it, given that my online life goes back to the BBS days. Do you think it embarrasses me, really?

Corporate pays my bills, not songs about birds, rainbows and how everyone should stick it to the man.


Talk is cheap; show us your PRs to nixpkgs, or stop complaining that your particular unfree trash isn't included in nixpkgs.


Please don't post in the flamewar style to HN. It's against the rules and we ban accounts that do it.

https://news.ycombinator.com/newsguidelines.html


Right, it is only fair; after all, football supporters must also go down onto the field to be allowed to voice their opinion.


You skipped the "spend 3 hours figuring out why libgcc_s.so.1 won't link when trying to compile that one protobuf tool you need right now" step.

Seriously, I dread every time I have to compile C++ in NixOS and there's no existing derivation already.

Oh, and when the Omnisharp guys start requiring a new version of mono that won't build the same way for whatever reason, so you're stuck on old versions of your IDE plugins if you want C# support until someone else figures out what broke.
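
(For readers who haven't written one: a derivation is Nix's build recipe, and for a well-behaved build it can stay small. A sketch assuming a plain CMake project, with hypothetical names:)

    { pkgs ? import <nixpkgs> {} }:
    pkgs.stdenv.mkDerivation {
      pname = "mytool";                    # hypothetical
      version = "1.0.0";
      src = ./.;
      nativeBuildInputs = [ pkgs.cmake ];  # cmake's setup hook drives the build
      buildInputs = [ pkgs.protobuf ];     # linked libraries go here
    }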


Well, there is a well-defined pattern using configure and make. When very complex builds like GNU Emacs and entire operating systems have done quite well with this pattern, I wonder what problem we are trying to solve.


That's a very apples-to-oranges comparison, wouldn't you agree?

GNU autotools assists in making source code packages portable across Unix-like systems, while make is for build automation and has fine-grained, file-level dependencies.

Nix is a package manager, and though it can build your software for you transparently, it is not tied to any specific configuration and build toolset; it can manage build-time and runtime dependencies (including downloading the sources), cache build results transparently, etc.


You can still re-install all of the dev tools on top of your prod images.


> Nix is an absurdly complex piece of software

I'd argue the opposite—it's beautiful in its simplicity. It's a tool that lets you express how to create something in a deterministic way.

A large part of what creates the learning curve that's best described as a wall is that this simplicity is built on a number of concepts that are likely initially foreign: a pure functional language, content-addressable storage, etc. These are all good concepts to learn about in isolation, but as with any domain, learning a large number of new things at the same time (particularly those that may directly conflict with your current mental models) is hard.

What's truly nice about the nix approach is that it is not bound to any stack. You can use it for any language, for building any software (Nix, the package manager). You can also use it to define an entire machine config (NixOS), or even an entire collection of machines (NixOps). It is not something that you need to re-learn every few months, or learn and then not use because some of your other tooling changed. Yes, the initial mental load is high, but the return on it is continuous, and you'll likely learn about some other useful things along the way.


>> Nix is an absurdly complex piece of software

>I'd argue the opposite—it's beautiful in it's simplicity. It's a tool that lets you express how to create something in a deterministic way.

I'd take a different angle and argue that Nix is version 1 of what to me looks like a very cool idea: it's both beautiful and not straightforward to use.

If that matters, wait for version 5. Look at the progression before Docker came on the market - it did what other tools could do, but packaged in a nice, developer-friendly toolkit. I expect the same for Nix in a few years' time.


You have the right idea, but you'll be waiting for a long time. Nix is transitional, and a version of Nix which appropriately enforces package-capability discipline likely will not give a traditional Linux/BSD userland. Instead, use Nix today, and shape its future; Nix is currently on version 2.


I think much of your criticism against Nix is undeserved. None of your arguments against Nix have any concrete specifics, and your rant about Docker doesn't even have anything to do with Nix. It's fine if you don't want to take the time to learn every single new technology that pops up; everyone has to be selective about what they choose to learn because we can only have so much time. But why criticize what you didn't take the time to sit down and learn, especially when it's not even being forced upon you?


I'm not going to go deploying anything in this blog post to production; if I deploy a Go app it will probably look like the first Dockerfile (and probably with an official Go image rather than a homebrewed one). That said, I really appreciate people like the author doing all this work, because in 10-20 years I think most people will be using something, though maybe not Nix/Kubernetes/Docker, that combines their best features and requires none of this fiddling.

But we're not going to get there without people doing this sort of stuff that can't possibly be worth the effort today.


As the author of this post, copy-pasting things out of my blog into your production cluster is probably a mistake.


[flagged]


Except I do do that stuff? I'm currently writing a draft of how to incorporate formal design semantics into "devops" flows. I also have a half-completed set of instructions on how to make a private Homebrew repo and have my own private Alpine Linux repo.


I don't think anybody who actually read the post has the misgivings that the poster to whom you replied so liberally sprinkles into every comment. YHBT. Sorry about that.


Well I have those misgivings, and while we are on the subject, since when do I represent you? That's news to me!


No... that is not systems engineering. Systems engineering would be thinking about how the program that enables configuration templating with shell variables will work, how you can consistently add and remove templated, variable-expandable configuration excerpts to get configuration overlays, and how you could design a configuration self-assembly service that other OS configuration packages could call; writing that down, then implementing and packaging it in an OS package, so your other OS packages could depend on it, call it, and use it in a consistent fashion. Or thinking about how your standard database settings should be configured, writing a formal specification, then writing a tool to implement that, packaging it into an OS package, and then having your OS database configuration packages formally depend on it and call it to consistently create databases. These are just a few examples of thinking about architecture, as opposed to writing how-to documents.


We just keep trying to hide complexity and fragility with more layers of indirection and abstraction - it doesn't solve the problem, but it makes it tomorrow's problem.

In the end, excessively clever people make me nervous, because they leave complex problems in their wake, which I'm not clever enough by half to solve.


I agree with you, mostly, but I really think Nix is one of those rare abstractions that is easily worth it: it has the potential to simultaneously solve and unify a bunch of different issues: reproducible builds, package management (system & language), container creation, system config, and so on. One language across all those domains is great, and the ability to roll back an entire system is amazing. Also: the ability to add your own software, GitHub tarballs, and piles of PHP as first-class citizens (you can cleanly remove them!) isn't emphasized enough.

You can keep your wget, build script, db config, and nginx config all in the same file!
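
A sketch of that all-in-one-file idea, as a fragment of a NixOS configuration.nix (these are standard NixOS options):

    { pkgs, ... }: {
      environment.systemPackages = [ pkgs.wget ];
      services.nginx.enable = true;
      services.postgresql.enable = true;
    }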

I think if you weighed the complexity of Nix, and NixOS in particular, against the tools it replaces (or anyway, could potentially replace), it's a no-brainer. I'm a convert.


This is an annoyingly ignorant comment.

Nix, unlike say docker, actually solves a problem correctly in a principled way: reliably and reproducibly going from source artefacts and their dependencies to installed binaries/libraries/etc., without conflicts or undesired interactions with other installed binaries/libraries, even different versions of the same thing.

It represents a major innovation and step forward in the depressingly dysfunctional mainstream OS landscape, which is still stuck with all the abstractive power of 80s BASIC, but with concurrency and several orders of magnitude more state to PEEK and POKE.


It doesn't solve it, because it is confined to a relatively niche GNU/Linux distribution, unlike Docker.


Nonsense. You can run nix on basically any Linux system (not just NixOS) and on macOS. Not just theoretically: I'm doing both, and am not running NixOS itself at all at the moment. It's also the only sane way to build docker images. Since nix itself of course runs fine inside docker, I suspect you should be able to build a docker image on Windows that way too, but I haven't tried.
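
(The install itself is a one-liner, using the official script from nixos.org:)

    $ sh <(curl -L https://nixos.org/nix/install)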

Also, even if you come up with a forward-facing solution and demonstrate it only in a particular domain, you still, IMO, deserve most of the credit for solving the problem, even if it takes others to translate it straightforwardly to other domains.


Nix only "works" in macOS, in the context of pretty UNIX, software development on macOS is usually something else.


I'm not sure I'm following. Isn't this argument applicable against Docker as well?


No, Docker on Windows can make use of Windows containers, and also interoperates with any kind of Windows or macOS application.


In what way does docker on macOS interoperate better with the rest of the system than a nix package? I'm pretty sure the answer is going to be "none" – you can build native apps, including UI apps, with nix (not that I recommend throwing away Xcode for your App Store development and switching to nix). How do you do that with docker?

I'm less sure about windows, can you explain a bit more how you use docker containers for providing "native" windows stuff (as opposed to as a more lightweight linux VM replacement when developing something that you really want to deploy on linux)?


For example, deploying IIS-based applications, including some Windows services and COM libraries.


Thanks, I wasn't aware of this.


Out of curiosity, why are you using Docker containers on Windows? To emulate static linking, or to provide network/process isolation (and does Docker under Windows provide "proper", i.e. secure, isolation?), or something else entirely?


Configure a couple of application servers from scratch, and manually configuring the services starts to get tiring.

Usually the traditional option would be to snapshot the VM right after the first successful install.

However stuff like Windows Nano now makes it interesting to start playing with containers on Windows.


> It's also the only sane way to build docker images.

That is definitely an exaggeration, considering the utility the industry leverages with plain Dockerfiles.

However, I am curious about the specifics that you are probably thinking of, if you'll elaborate.


Nix, the language, is not tied to any platform. Anyone willing to make it run on a new platform can make the appropriate pull request(s) to make it happen.


Nix, the package manager, can be installed on any Linux distribution or on macOS. It's not confined to NixOS. It runs pretty much anywhere Docker does.


Docker also runs on Windows....


So why not run nix in docker till Windows gets first class support?


Will it? Nix solves a unixy problem in a unixy way. The Nix/WSL story might get happier at some point to solve a problem for Windows based developers of Linux based servers, but that's effectively just "running nix in docker", except that the interface is WSL instead of Docker.

I know some people write Windows software from Nix, but this is a way for Linux developers to make desktop apps for Windows users, not a solution to any problem Windows developers have.


I can see three broad reasons people develop on Windows:

1. They want to create a Game or Windows end user UI App (e.g. Overwatch or Photoshop)

2. They write some internal corp tooling or B2B software for windows shops (e.g. gluing some SAP or Oracle garbage together)

3. They want to write a server application (which means deploying on linux, which is basically the only server OS left). But they do not want to run linux as their desktop OS for corp or private reasons.

The 1st category probably has limited use for either docker or nix (in the absence of first class windows support). Might still be useful for tooling though.

The second category probably has use for docker mostly as a shitty linker, maybe also for isolation/security (I don't know the windows docker ecosystem).

The 3rd category can and totally should be using nix and I'd guess is at least double digit market share (so not insignificant; e.g. before Google banned them, a large fraction of Googlers had windows notebooks).


Because it is a kludge without support for Windows development workflows?


That's an excellent reason not to -- I wasn't previously aware that docker had native windows application support on windows.


Nix can run on WSL, though. So it indirectly runs on Windows just like Docker runs on Windows...


Docker makes use of Windows containers....

I don't use Windows as pretty UNIX, rather for its own capabilities.


Did anyone try running Nix in WSL?


Yes. If you're trying to develop Linux software from Windows, Nix works to the extent that the rest of your software works. But the experience of working on WSL is not the experience of working on Linux. And it isn't likely to help you build your Windows based programs.


How do you make Dockerfiles reproducible?


No, Nix is not like the other lying, leaky abstractions. Nixpkgs takes great effort not to slap another layer on top, but to actually wade through the muck and fix problems at their source.

You sound very alienated, with a realistic cynicism that accurately reflects most of the industry. But please don't assume everything has to be that way. Tools like Nix and community efforts like Nixpkgs really are the exception.


> In the end, excessively clever people make me nervous, because they leave complex problems in their wake, which I'm not clever enough by half to solve.

I absolutely love this quote and very much feel the same. Often I feel like the industry at large has a bit of a tricky problem, and then in fixing that problem they forget about a whole set of issues the thing they are trying to fix dealt with, so they've just traded one set of problems for another, just now with more complexity. Monolith vs. microservices is probably the best example of this: monoliths have a bunch of problems that make developing with large teams hard, but microservices add a whole host of issues (e.g. transactional consistency, performance) that are much more difficult to solve for most dev teams.


I actually disagree with GP. Nix doesn't add a layer of indirection; instead it replaces many of the existing layers (common development environment, locking dependencies, reproducible environments, package management, configuration management, image building, CI/CD).

Nix's real power is being a language that allows you to describe all of those things in a declarative way.


> excessively clever people make me nervous, because they leave complex problems in their wake, which I'm not clever enough by half to solve.

They certainly do! And quite often, the reason they leave those problems is either a) they're not clever enough either, b) there's enough actual work that solving them doesn't make them feel clever, and/or c) there are other things they could do that make them look more clever.

I have made some really good money ripping out somebody's too-clever-by-half solution and replacing it with something simple, well supported, and dreadfully unfashionable.


Same here. Then I come back to the site ten years later on another assignment and the solutions I put in are still there. I ask why and am then told that they work reliably, so why rip them out? There is nothing I can say to that but be glad.


Yes, and while they might be very clever, they are still not clever enough to make complex problems simple, as that requires extreme cleverness, experience and insight.


I know it's less of an issue for Go users, but one of the great things about nix is how your runtime dependencies are (mostly) defined by inference from the programs you've built. So, if you build a program that links against libpq.so, the runtime requirements will automatically include that library!

Since your runtime requirements depend on how you compile a system, you usually have to be quite careful with keeping your Dockerfile in sync with what you're building. This busywork just goes away.

Nix involves quite a large upfront time commitment to understand it, but it solves problems that I haven't seen solved elsewhere (well, I guess guix is similar), and does it across all the languages you write for. That it can work across toolchains and languages is a unifying force, and so I think it's one of the better systems for reducing the "exponential fragmentation" referred to above.
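
You can inspect the inferred closure directly. Assuming a default.nix that builds your program:

    $ nix-build                  # produces a ./result symlink
    $ nix-store -qR ./result     # lists the full runtime closure, libpq.so's package included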


Actually, the thing we started using nix for was reliable caching of compiled artifacts, including not just your code, but all the programs and libraries that your code depends on. It's another thing that's difficult to do in a general sense, but if you have a fairly strong notion of reproducible builds, it's possible.


I don't see your point. Kubernetes may be awfully complicated and the wrong solution all along (which, unfortunately, doesn't mean it'll die quickly — C++ was always pretty awful), and Nix may be as well, but their actual point is to help your sanity, not "save up a few megabytes of space".

"There are other solutions"?

Well, look: a well-defined, easily reproducible, immutable environment configurable by text files is something that pretty much everybody wants, even if they've never thought about concepts like this. I mean, literally, your grandma would want it on her iPad, because it would mean you could buy her a new iPad and she wouldn't see any difference aside from the fact that the glass is not broken anymore. Every programmer wants it, because it would free them from that awful system administrator hegemony. I want it, because when getting a new laptop that I plan to use for slightly different purposes than my current hardware, instead of building up my environment from scratch, gradually installing more and more shit I forgot years ago even exists, I could review a couple of 200-line config files (which I possibly didn't even write myself; the system just managed to express in a well-readable format what I got by experimenting in the shell years ago), remove some lines, add a couple of others, and get exactly what I want on that laptop.

And we don't really want any tradeoffs here. Yeah, as real rock'n'rollers we want a fucking lot, but that's the point: we want to find some way to make all the problems caused by the complexity of making lots of software and hardware work together disappear.

Yet, I don't see this ultimate dream come true on every — be it real or virtual — device, for every purpose. So be it a problem of marketing, or the "other solutions" being a bit more of a tradeoff than we are ready to accept, or something else, but these "other solutions" don't seem to do as good a job as you imply. So, my understanding is that we are just "not there yet". Hence "all these technologies popping up left and right, switching names, dying abruptly, getting superseded or hacked through vulnerabilities". There is a dream, maybe even a vision, but there is no solution. Not that I'm aware of. (And, once more, the real solution is not just something that is "possibly possible", but also includes solving the problem of user adoption.)


Thank you for making me feel like I'm not taking crazy pills.

I share the parent's frustration in trying to keep my head wrapped around all the different players in the space. I tend to agree that the more I look into Nix, the less it seems like it will actually take off. But it's utterly confused to segue from that to "Therefore, they must not be solving any problems we really have."

I'm always wondering when I read one of these posts what setup they must have to get work done. Seriously, what do most people do? How is it apparently so easy to share development environments between your co-workers in a manageable way? And if the answer is "each employee starts out with the same laptop baselined in the same way, and then they work on one project which is in a monorepo, whose system dependencies are already installed on the laptop, and those dependencies never change" then... hello? You don't see that this is not a solution? I fundamentally _do not_ understand how anybody can live in the world and not be annoyed by this. What do they do? WHAT DO THEY DO?!


Creating a docker image is one of many things Nix does. IMO one of the killer features is actually providing a reproducible dev environment.

Nix, besides the build, can provide a shell that contains all the tooling necessary to build the application. The definition lives in the same repo as the application, and everyone who uses it will get the exact same tools with the same versions.


It’s always a good idea to pin the nixpkgs version in your nix-shell.

Once you do this, it saves you from all the “it runs on my machine” problems. No more days following a README setting up a Vagrant/VirtualBox machine, which is then slow as a VM with limited memory on a laptop.

Instead you type one command, “nix-shell”, and get everything installed locally, as required, in a deterministic way. Agreed, this has been a revelation to the teams I’ve worked on.
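
Pinning is a few lines in the shell.nix itself. A sketch pinning to a release tag (for real reproducibility you'd pin an exact commit, and ideally a sha256):

    let
      # pin nixpkgs to the 20.03 release tag instead of the moving <nixpkgs> channel
      pkgs = import (fetchTarball
        "https://github.com/NixOS/nixpkgs/archive/20.03.tar.gz") {};
    in
    pkgs.mkShell {
      buildInputs = [ pkgs.go pkgs.nodejs ];
    }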


>tradeoffs

I've had 40 years to observe and think about why this is all happening.

What I think happens: new blood enters the industry, ignores the accrued wisdom of the old blood, re-invents things, regurgitates, recycles (every time someone makes money with it), and then... becomes the old blood.

Like, there seriously is some sort of industry-bound amnesia effect at 3 and 4 orders of scale, in my opinion.

There are those who read the docs, and write new ones/edit old ones. There are those who don't read the docs, but write new ones only. There are also those who read the docs, and don't write anything new, too. Then, there are those who don't read the docs and don't write new docs, either.

Alas, all these are tied up in a universal struggle to gain dominance over each other, and it's called the technology startup battle.

Docs don't get old. People do.


The Nix community has none of the buzzword churn of the Docker community. Our ideas are old as hell, and proudly so.


Indeed. You can still read Eelco Dolstra's thesis [1] from 2006 and the general principles still apply. Some details have changed (e.g. the builder.sh scripts have mostly been abstracted away), but for understanding the Nix store, the Nix language, etc. it's still completely relevant.

[1] Eelco is the creator of Nix: https://nixos.org/~eelco/pubs/phd-thesis.pdf


Docker in and of itself is quite reasonable. You only start seeing buzzwords when you go into DevOps.


How do you get a completely reproducible environment without Nix? I tried doing that with Dockerfiles and it was a real pain.


> But we have alternatives for these problems.

No, we don't.

What else can let me install two different versions of gcc and glibc and choose one with a one-line config change?

Seriously, if you have any suggestions, I'm all ears. I agree Nix is too complex.
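
To make that concrete: switching toolchains really is a one-attribute change (attribute names like gcc8/gcc9 track nixpkgs and vary by revision):

    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      buildInputs = [ pkgs.gcc9 ];   # swap in pkgs.gcc8 for the older toolchain
    }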


The problem with Docker is that it abstracts away one level too much. It's there to provide container isolation, but in an abstraction that almost feels like a VM. So caching is pretty hard; in fact caching is more or less based on the source code layout. Nix doesn't provide this level of isolation; it doesn't pretend to provide a separate machine. That allows caching of packages, thus eating less disk space and making rebuilds faster.

Not sure how Nix(OS) handles isolation in terms of cgroups/namespaces, though; it seems to rely on systemd for that.


You can specify containers in your configuration.nix if you want to run processes in isolation. It doesn't require building an image.
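
For example, a declarative container in configuration.nix, using the standard NixOS options (it's backed by systemd-nspawn under the hood):

    containers.webserver = {
      autoStart = true;
      config = { pkgs, ... }: {
        services.nginx.enable = true;
      };
    };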


> Now, when I'm being asked to trade away my sanity learning this nonsense in return for a couple of megabytes of disk space that would otherwise be wasted, I just don't see the value.

Well said. And not only your sanity: by using this stuff in production you’re also dragging everyone else’s sanity with you. The benefits need to be compelling and aligned with the business needs, and saving a few MBs here and there at the cost of time is definitely not in alignment except in the most exceptional cases.

Fun to read about and toy around with, but let’s leave it at that. The first example, using a multi-stage Docker build and Alpine, is fantastic.


> How the heck is our field going to avoid fragmentation when it keeps growing exponentially like this?

It will be corrected the next time we see a major bubble burst/recession. Right now we have literally millions of people around the world working in tech, and everyone has ideas.



