After reading the web page posted above, and the comments here, I've reached the conclusion that Docker, Nix, etc. are trying to solve a problem that shouldn't exist, and that the actual problem lies in the mechanisms operating systems provide for dealing with programs and dependencies.
More specifically, it seems like the Unix filesystem layout, with its bin, etc, usr, and other well-known directories, is part of the problem.
Also, the environment variables mechanism is part of the problem.
Why couldn't we have a much simpler mechanism for running things? For example:
a) each application should live in its own directory.
b) each file or folder should be versioned.
c) common files used by applications, fonts for example, should be installed in the application folders themselves, and the filesystem should provide hooks so that external systems are notified when these files are created, moved, or deleted. With this system, important files, such as tool executables, would automatically be found and invoked without needing to add their location to a path.
d) runtime program dependencies should be provided via a text file of parameters rather than by environment variables. This file would be specified each time a program runs, and the user could create scripts that run the program with specific parameters; each program would ship a default parameters file in its own folder. How is that different from environment variables? With this system there is no system-wide dependency of a program on a unique string, so the need to override environment variables for specific programs disappears. In other words, programs shouldn't need environments to run, but lists of dependencies (a minimal sketch of this idea follows below).
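To make point d) concrete, here is a minimal sketch in Python, assuming a hypothetical JSON "dependencies file" and a launcher that hands dependencies to the program as explicit arguments instead of environment variables. The file format, paths, and the --dep flag are all invented for illustration, not an existing tool:

    import json
    import subprocess

    def run_with_deps(program, deps_file):
        """Launch `program` with only the dependencies listed in `deps_file`,
        instead of whatever happens to be in the caller's environment."""
        with open(deps_file) as f:
            deps = json.load(f)  # e.g. {"libfoo": "/apps/libfoo/2.1", ...}
        # Dependencies are passed as explicit arguments, not inherited global state.
        args = [program] + [f"--dep={name}={path}" for name, path in deps.items()]
        subprocess.run(args, env={}, check=True)  # deliberately empty environment

    # A per-user wrapper script would then just be:
    # run_with_deps("/apps/mytool/1.4/mytool", "/apps/mytool/1.4/default-deps.json")

The point of the sketch is that nothing the program needs is ambient: every dependency it receives is named in a file that sits next to it or is passed in by the caller.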
With the above solution, making repeatable and shareable development and test environments would be extremely easy: just copy the application folders and the configuration files you need, as in the sketch below. If something were missing, copying it from another source would be the fix.
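For instance, assembling such an environment could be as simple as copying the versioned application folders, default parameters files included (the paths and folder layout here are invented for illustration):

    import shutil
    from pathlib import Path

    env_root = Path("/envs/test-env")
    for app in ("/apps/mytool/1.4", "/apps/libfoo/2.1"):
        src = Path(app)
        # Copy the whole versioned folder, default parameters file included.
        shutil.copytree(src, env_root / src.parent.name / src.name, dirs_exist_ok=True)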
Ultimately, having a set of programs at hand and wanting to combine them to produce some output is exactly the same as having a bunch of functions in a program and invoking them in a specific order to get a result. And while in programming we have recognized the problem of global state and have taken measures to minimize it, for example with functional programming, we haven't done so for programs.
In other words, the problem is that we are trying to compute things using global state, once more, this time at the operating system level!
For me, that's the fundamental issue. All these tools are welcome, but unless the fundamental problem is solved, no real progress will be made and there will always be tool fragmentation (do we use Nix? Docker? some other solution?).
> the actual problem lies in the mechanisms operating systems provide for dealing with programs and dependencies. More specifically, it seems like the Unix filesystem layout, with its bin, etc, usr, and other well-known directories, is part of the problem.
>
> Also, the environment variables mechanism is part of the problem.
The way I see it, Nix exactly solves the "fundamental problem" as you describe it!