
> I don't understand why this isn't done

Because when there's a security update to (say) OpenSSL, it's better for the maintainers of just that library to push an update, as opposed to forcing every single dependent to rebuild & push a new release.
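To make that concrete, here's a minimal sketch (assuming a typical Linux toolchain and OpenSSL 1.1+ headers) of a program that links libcrypto dynamically. The second line of output changes as soon as the distro ships a patched libcrypto and the process restarts, with no rebuild of the program itself:

    /* version_check.c -- minimal sketch of a dynamically linked OpenSSL consumer.
     * Assumed build command: gcc version_check.c -lcrypto -o version_check
     */
    #include <stdio.h>
    #include <openssl/opensslv.h>  /* OPENSSL_VERSION_TEXT (compile time) */
    #include <openssl/crypto.h>    /* OpenSSL_version()    (run time)     */

    int main(void) {
        /* Baked into the binary when it was built. */
        printf("built against: %s\n", OPENSSL_VERSION_TEXT);

        /* Resolved from the shared libcrypto at run time: after a security
         * update to the system library this changes on the next start,
         * without rebuilding this program. A statically linked build would
         * keep running the old code until the dependent itself is rebuilt
         * and redeployed. */
        printf("running with:  %s\n", OpenSSL_version(OPENSSL_VERSION));
        return 0;
    }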



My main issue with this rationale is that, in the vast majority of production environments (at least the ones I've seen in the wild, and indeed the ones I've built), updating dependencies for dynamically-linked dependents is part of the "release" process just like doing so for a statically-linked dependent, so this ends up being a distinction without a difference; in either circumstance, there's a "rebuild" as the development and/or operations teams test and deploy the new application and runtime environment.

This is only slightly more relevant for pure system administration scenarios where the machine is exclusively running software prebuilt by some third-party vendor (e.g. your average Linux distro package repo). Even then, unless you're doing blind automatic upgrades (which some shops do, but it carries its own set of risks), you're still hopefully at least testing new versions and employing some sort of well-defined deployment workflow.

Also, if that "security update" introduces a breaking change (which Shouldn't Happen™, but something something Murphy's Law something something), then - again - retesting and rebuilding a runtime environment for a dynamically-linked dependent v. rebuilding a statically-linked dependent is a distinction without a difference.


I would have agreed with this statement about five years ago. (Even though you would have had to restart all the dependent binaries after updating the shared libs.)

Today, with containers becoming increasingly the de facto means of deploying software, it's not so important anymore. The upgrade process is now: (1) build an updated image; (2) upgrade your deployment manifest; (3) upload your manifest to your control plane. The control plane manages the rest.

The other reason to use shared libs is for memory conservation, but except on the smallest devices, I'm not sure the average person cares about conserving a few MB of memory on 4GB+ machines anymore.


> Today, with containers becoming increasingly the de facto means of deploying software

I think that's something of an exaggeration.

Yes, containers are popular for server software, but even then it's a huge stretch to claim they are becoming de facto.


App bundles on MacOS and iOS are basically big containers, though there is some limited external linking through Apple's frameworks scheme.

And obviously video game distribution has looked like this since basically forever as well.


> App bundles on MacOS and iOS are basically big containers, though there is some limited external linking through Apple's frameworks scheme.

There's a file hundreds of megabytes large containing all the dynamically-linked system libraries on iOS to make your apps work.


Video games do not run on/as containers. Quite the opposite, in fact.


In addition to pjmlp's list, Steam is pushing toward this for Linux games (and one could argue that Steam has been this for as long as it's been available on Linux, given that it maintains its own runtime specifically so that games don't have to take distro-specific quirks into account).

Beyond containers / isolated runtime environments, the parent comment is correct about games (specifically of the console variety) being historically nearly-always statically-linked never-updated monoliths (which is how I interpreted that comment). "Patching" a game after-the-fact was effectively unheard of until around the time of the PS3 / Xbox 360 / Wii (when Internet connectivity became more of a norm for game consoles), with the sole exception of perhaps releasing a new edition of it entirely (which would have little to no impact on the copies already sold).


Kind of.

They do on Xbox, Switch, iOS, and Android sandboxes.


> Today, with containers becoming increasingly the de facto means

This assertion makes no sense at all and entirely misses the whole point of shared/dynamic libraries. It's like a buzzword is a magic spell that makes some people forget the entire history and design requirements up to that very moment.


Sometimes buzzwords make sense, in the right context. This was the right context.

Assuming you use containers, you're unlikely to log into them and keep them up to date and secure by running apt-get upgrade.

The most common workflow is indeed: build your software in your CI system, and in the last step create a container image with your software and its dependencies. Then update your deployment with a new version of the whole image.

A container image is for all intents and purposes the new "static binary".

Yes, technically you can look inside it; yes, technically you can (and you do) use dynamic linking inside the container itself.

But as long as the workflow is the one depicted above, the environment no longer has the requirements that led to the design of dynamic linking.

It's possible to have alternative workflows for building containers: you could fiddle with layers and swap an updated base OS under a layer containing your compiled application. I don't know how common that is, but I'm sure somebody will want/have to do it.

It all boils down to whether developers still maintain control over the full deployment pipeline as containers penetrate the enterprise (i.e. whether we retain the "shift to the left", another buzzword for you).

Containers are not just a technical solution; they are the embodiment of the desire of developers to free themselves from the tyranny of filing tickets and waiting days to deploy their apps. But that leaves the security departments in enterprises understandably worried, as most of those developers are focused on shipping features and often neglect (or ignore) security concerns around things that live one layer below the application they write.


Shared libraries have largely proven that they aren't a good idea, which is why containers are so popular. Between conflicts and broken compatibility between updates, shared libraries have become more trouble than they are worth.

I think they still make sense for base-system libraries, but unfortunately there is no agreed upon definition of 'base-system' in the wild west of Linux.


And the reason we're using containers in the first place is precisely because we've messed up: we traded shared libs for a proven-interworking set of them, something that can trivially be achieved using static linking.


Actually the main selling point of containers has nothing to do with "proven interworking", but the ability to deploy and run entire applications in a fully controlled and fully configurable environment.

Static libraries do nothing of the sort. In fact, they make it practically impossible to pull it off.

There's far more to deploying software than mindlessly binding libraries.


On Windows, I don't need to use Docker in order to run a program in a reproducible way. I just download a program, and in 90% of cases it "just works" whether I'm running Windows 10, Windows 8, or the decade-old Windows 7.

Furthermore, installing that program will (again, in 90% of cases at least) not affect my overall system configuration in any way. I can be confident that all of my other programs will continue to work as they have.

Why? Because any libraries which aren't included in the least-common-denominator version of Windows are included with the download, and are used only for that download. The libraries may be shipped as DLLs next to the executable, which are technically dynamic, but it's the same concept—those DLLs are program-specific.
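To illustrate: Windows' default DLL search order checks the application's own directory before the system directories, so a copy shipped next to the executable wins over anything installed system-wide, whether you link implicitly or load it explicitly. A minimal Win32 sketch (the DLL name and its export are hypothetical placeholders):

    /* Load a vendored DLL that ships alongside the .exe. */
    #include <windows.h>
    #include <stdio.h>

    typedef int (*do_work_fn)(int);

    int main(void) {
        /* By default the application's own directory is searched first,
         * so the bundled copy is the one that gets loaded. */
        HMODULE lib = LoadLibraryA("vendored.dll");
        if (!lib) {
            fprintf(stderr, "failed to load vendored.dll (error %lu)\n", GetLastError());
            return 1;
        }

        do_work_fn do_work = (do_work_fn)GetProcAddress(lib, "do_work");
        if (do_work)
            printf("do_work(21) = %d\n", do_work(21));

        FreeLibrary(lib);
        return 0;
    }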

This ability is what I really miss when I try to switch to desktop Linux. I don't want to set up Docker containers for random desktop apps, and I don't want a given app to affect the state of my overall system. I want to download and run stuff.

---

I realize there are a couple of big caveats here. Since Windows programs aren't sandboxed, misbehaving programs absolutely can hose a system—but at least that's not how things are supposed to work. I'm also skipping over runtimes such as Visual C++, but as I see it, those can almost be considered part of the OS at this point. And I can have a ridiculous number of versions of MSVC installed simultaneously without issue.


> On Windows, I don't need to use Docker in order to run a program in a reproducible way. I just download a program, and in 90% of cases it "just works" whether I'm running Windows 10, Windows 8, or the decade-old Windows 7.

One program? How nice. How about 10 or 20 programs running at the same time, and communicating between themselves over a network? And is your program configured? Can you roll back changes not only in which versions of the programs are currently running but also in how they are configured?

> This ability is what I really miss when I try to switch to desktop Linux. I don't want to set up Docker containers for random desktop apps,

You're showing some ignorance and confusion. You're somehow confusing application packages and the natural consequence of backward compatibility with containers. In Linux, deploying an application is a solved problem, unlike Windows. Moreover, Docker is not used to run desktop applications at all. At most, tools like Canonical's Snappy are used, which enable you to run containerized applications in a completely transparent way, from installation to running.


> the ability to deploy and run entire applications in a fully controlled and fully configurable environment

But isn't the reason to have this fully controlled and fully configurable environment to have a proof of interworking? Because when the environment differs in any way, you can (and people already do) say that it's not supported.


> But isn't the reason to have this fully controlled and fully configurable environment to have a proof of interworking?

No, because there's far more to deploying apps than copying libraries somewhere.


> Actually the main selling point of containers has nothing to do with "proven interworking", but the ability to deploy and run entire applications in a fully controlled and fully configurable environment.

Which is exactly the same selling point as for static linking.


Some of us use Linux as a desktop environment, and like having security patches applied as soon as the relevant package has been updated.


As a user of the Linux desktop, I really love it when library updates break compatibility with the software I use too. Or can't be installed because of dependency conflicts.

Containers are popular because shared libraries cause more trouble than they are worth.


Containers most likely wouldn't have existed if we had a proper ecosystem around static linking and resolution of dependencies. Containers solve the problem of the lack of executable state control, mostly caused by dynamic linking.


More broadly, containers solve the problem of reproducibility. No longer does software get to vomit crap all over your file system in ways that make reproducing a functioning environment frustrating and troublesome. They have the effect of side-stepping the dependencies problem, but that isn’t the core benefit.


But the images themselves are not easily reproducible with standard build tooling.


True—but that's far less of a problem, because it rarely occurs unexpectedly and under a time crunch.

Diffing two docker images to determine the differences between builds would be far less onerous than attempting to diff a new deployment against a long-lived production server.


Dynamic linking isn't the issue. Shared libraries are the issue. You could bundle a bunch of .so files with your executable & stick it in a directory, and have the executable link using those. That's basically how Windows does it, and it's why there's no "dependency hell" there despite having .dlls (dynamically linked libraries) all over the place.

Shared libraries are shared (obviously) and get updated, so they're mutable. Linux systems depend on a substantial amount of shared mutable state being kept consistent. This causes lots of headaches, just as it does in concurrent programming.
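For what it's worth, the Windows-style layout is possible on Linux too, by embedding an $ORIGIN-relative rpath so the dynamic linker resolves bundled .so files relative to the binary rather than from the shared system locations. A rough sketch (the library name and directory layout are made up):

    /* main.c -- app that bundles its own private shared libraries.
     *
     * Hypothetical on-disk layout:
     *   myapp/
     *     myapp            (this executable)
     *     libs/libfoo.so   (bundled dependency; "foo" is a placeholder)
     *
     * Assumed build command, embedding an $ORIGIN-relative rpath so the
     * dynamic linker looks in ./libs next to the binary before the
     * system-wide library directories:
     *   gcc main.c -L./libs -lfoo -Wl,-rpath,'$ORIGIN/libs' -o myapp
     *
     * Still dynamic linking, but the libraries are private copies that
     * travel with the program -- no shared mutable state with the rest
     * of the system.
     */
    int foo_do_work(int x);   /* provided by the bundled libfoo.so */

    int main(void) {
        return foo_do_work(42) == 0 ? 0 : 1;
    }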


Based on my experience this is very rarely the case unless you have an extremely disciplined SecOps team.


> Based on my experience this is very rarely the case

You must have close to zero experience then, because that's the norm for any software that depends on, say, third-party libraries that ship with an OS/distro.

Recommended reading: Debian's openssl package.

https://tracker.debian.org/pkg/openssl


You are talking about a FOSS project, I am talking about a company that has a service that uses OpenSSL in production.


These are not diametrically opposed. Your company can have a service that uses OpenSSL in production that runs on Debian to automatically take advantage of Debian patches and updates if it's linked dynamically to the system provided OpenSSL.

You can either employ an extremely disciplined SecOps team to carefully track updates and CVEs (you'd need this whether you're linking statically or dynamically) or you can use e.g. Debian to take advantage of their work to that end.


Every single company that I used to work for had an internal version of Linux that they approved for production. Internal release cycles are disconnected from external release cycles. On top of that, some of these companies were not using system-wide packages at all; you had to reference a specific version of packages (like OpenSSL) during your build process. We had to do emergency patching for CVEs and bump the versions in every service. This way you can have 100% confidence that a particular service is running with a particular version of OpenSSL. This process does not depend on Debian's (or other FOSS vendors') release cycles, and the dependencies are explicit, therefore the vulnerability assessment is simpler (as opposed to going to every server and checking which version is installed). Don't you think?


If you need that level of confidence - sure. But it's going to cost a lot more resources and when you're out of business your customers are fully out of updates. I wouldn't want to depend on that (then again a business customer will want to maintain a support contract anyway).

Isn't a containerized solution a good compromise here? You could use Debian on a fixed major release, be pretty sure what runs and still profit from their maintenance.


What I'm saying is that the only way you can get away with not having an "extremely disciplined SecOps team" is to depend on someone else's extremely disciplined SecOps team. Whether you link statically or dynamically is orthogonal.

> Every single company that I used to work for had an internal version of Linux that they approved for production.

I can't deny your experience, but meanwhile I've been seeing plenty of production systems running Debian and RHEL, and admins asking us to please use the system libraries for the software we deployed there.

> Internal release cycles are disconnected from external release cycles.

That seems to me like the opposite of what you'd want if you want to keep up with CVEs. If you dynamically link system libraries you can however split the process into two: the process of installing system security updates doesn't affect your software development process for as long as they don't introduce breaking changes. Linking statically, your release cycles are instead inherently tied to security updates.

> We had to do emergency patching for CVEs and bump the versions in every service.

What is that if not tying your internal release cycles to external release cycles? The only way it isn't is if you skip updates.

> This process do not depend on Debian's (or other FOSS vendor's) release cycles and the dependencies are explicit, therefore the vulnerability assessment is simpler (as opposed to go to every server and check which version is installed). Don't you think?

I don't know, going to every server to query which versions of all your software they are running seems similarly cumbersome. Of course, if you aren't entirely cowboying it you'll have automated the deployment process whether you're updating Debian packages or using some other means of deploying your service. Using Debian also doesn't make you dependent on their release cycles. If you feel like Debian isn't responding to a vulnerability in a timely manner, you can package your own version and install that.


> You are talking about a FOSS project

I'm talking about the operating system that's pretty much a major component of the backbone of the world's entire IT infrastructure, whether it's directly or indirectly through downstream distros that extend Debian, such as Ubuntu. Collectively they are reported to serve over 20% of the world's websites, and consequently they are the providers and maintainers of the OpenSSL that's used by them.

If we look at containers, Docker Hub shows that Debian container images have been downloaded over 100M times, and Ubuntu container images over 1B times. These statistics don't track how many times derived images are downloaded.



