Actually the main selling point of containers has nothing to do with "proven interworking", but the ability to deploy and run entire applications in a fully controlled and fully configurable environment.
Static libraries do nothing of the sort. In fact, they make that kind of deployment practically impossible to pull off.
There's far more to deploying software than mindlessly linking libraries.
On Windows, I don't need to use Docker in order to run a program in a reproducible way. I just download a program, and in 90% of cases it "just works" whether I'm running Windows 10, Windows 8, or the decade-old Windows 7.
Furthermore, installing that program will (again, in 90% of cases at least) not affect my overall system configuration in any way. I can be confident that all of my other programs will continue to work as they have.
Why? Because any libraries which aren't included in the least-common-denominator version of Windows are included with the download, and are used only for that download. The libraries may be shipped as DLLs next to the executable; those are technically dynamic, but it's the same concept: the DLLs are program-specific.
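The mechanism behind this is Windows' DLL search order: the loader checks the directory the executable was loaded from before any system-wide location, so a (hypothetical) app folder can carry private copies of everything the program needs beyond the OS baseline:

```
MyApp\
    MyApp.exe
    libfoo.dll    <- app-local copy; the loader finds this before any system-wide version
    libbar.dll    <- used only by MyApp.exe, so it can't break other programs
```

Deleting the folder removes the program and its libraries together; no shared state was changed by installing it.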
This ability is what I really miss when I try to switch to desktop Linux. I don't want to set up Docker containers for random desktop apps, and I don't want a given app to affect the state of my overall system. I want to download and run stuff.
---
I realize there are a couple of big caveats here. Since Windows programs aren't sandboxed, misbehaving programs absolutely can hose a system, but at least that's not how things are supposed to work. I'm also skipping over runtimes such as Visual C++, but as I see it, those can almost be considered part of the OS at this point. And I can have a ridiculous number of versions of MSVC installed simultaneously without issue.
> On Windows, I don't need to use Docker in order to run a program in a reproducible way. I just download a program, and in 90% of cases it "just works" whether I'm running Windows 10, Windows 8, or the decade-old Windows 7.
One program? How nice. How about 10 or 20 programs running at the same time, and communicating between themselves over a network? And is your program configured? Can you roll back changes not only to which versions of the programs are currently running but also to how they are configured?
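This multi-service scenario is the one container tooling is built around. As a sketch (all image names and settings here are hypothetical), a Compose file pins every version and every piece of configuration in a single declarative document, so "rolling back" the whole stack means checking out an earlier revision of the file:

```yaml
# hypothetical two-service stack: versions and configuration live together,
# under version control, instead of being scattered across machines
services:
  web:
    image: example/web:1.4.2        # exact version, not "whatever is installed"
    environment:
      - API_URL=http://api:8080     # services find each other over a private network
    depends_on:
      - api
  api:
    image: example/api:2.0.1
```

Nothing like this falls out of "download an .exe and run it"; the point is reproducing the whole system, not one program.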
> This ability is what I really miss when I try to switch to desktop Linux. I don't want to set up Docker containers for random desktop apps,
You're showing some confusion here: you're conflating application packages, and the natural consequences of backward compatibility, with containers. On Linux, deploying an application is a solved problem, unlike Windows. Moreover, Docker is not used to run desktop applications at all. At most, tools like Canonical's Snappy are used, which let you run containerized applications in a completely transparent way, from installation to running.
> the ability to deploy and run entire applications in a fully controlled and fully configurable environment
But isn't the reason for having this fully controlled and fully configurable environment precisely to have proof of interworking? Because when the environment differs in any way, you can say, and people already do, that it's not supported.
> Actually the main selling point of containers has nothing to do with "proven interworking", but the ability to deploy and run entire applications in a fully controlled and fully configurable environment.
Which is exactly the same selling point as for static linking.
Static libraries do nothing of the sort. In fact, they make that kind of deployment practically impossible to pull off.
There's far more to deploying software than mindlessly linking libraries.