> 15 years ago, when a project would kick-off, as a sysadmin
> I'd be invited in and the developers and I would hash out
> what versions of each language and library involved the
> project would use
To wit: the new ideologies in infrastructure management are designed to solve the underlying problem that necessitated that kind of working setup. Why should the version of a lib in one part of the software somehow pose an existential threat to the infrastructure? Bake the dependencies into contained, independently deployable pieces, so that app-level code can evolve without bringing down the world with it. Make it easy to revert, and/or use phased rollouts, and you've got the ability to iterate quickly and keep pace with external dependencies, and it no longer has to be some scary thing that requires big back-and-forth meetings over mundane details.
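To make the phased-rollout idea concrete, here's a minimal sketch (the user ids, salt, and percentages are hypothetical, nothing tied to any particular platform): hash each user id into a stable bucket and send only a configurable slice of traffic to the new deployment, keeping the old one around for instant rollback.

    import hashlib

    def rollout_bucket(user_id: str, salt: str = "checkout-v2") -> float:
        # Hash the id with a per-rollout salt so the decision is sticky per user
        # and independent across rollouts. Result is a stable value in [0, 1).
        digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
        return int(digest[:8], 16) / 0x100000000

    def use_new_version(user_id: str, rollout_percent: float) -> bool:
        # Route a fixed slice of traffic to the new deployment; dial the
        # percentage up as confidence grows, or back to 0 to revert.
        return rollout_bucket(user_id) < rollout_percent / 100.0

    # e.g. send 5% of users to the new, self-contained deployment
    print(use_new_version("user-42", rollout_percent=5))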
(As for software that releases often, maybe it's an over-correction, but there's a reason things don't work as they did in the glory days, and that's because they were never really that glorious.)
This doesn't necessarily rule out the expertise of systems administration, because the platforms for all of this need to be built & maintained, and there's still a lot of work to be done on network border security, etc. It's a movement that refocuses systems administration on actual systems administration, instead of having it be this big org-wide arbiter of microdecisions, with all the baggage that goes along with trying to be the gatekeeper of everything.
> Why should the version of a lib in one part of the software somehow pose existential threat to the infrastructure?
Because that's how software developers wrote every dominant packaging system :P
There are tradeoffs to self-contained units. Disk space isn't so much of a practical concern these days, but security is very real: with a dozen apps, you could be at the mercy of a dozen different entities to update their embedded OpenSSL libraries.
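Just to illustrate the scale of that problem, a rough sketch (the /opt/apps layout and the libssl filename pattern are assumptions, not any particular distro's convention): walk each self-contained app's tree and count how many privately bundled copies of OpenSSL you'd have to chase after the next CVE.

    from pathlib import Path

    # Assumed layout: each self-contained app ships its own libraries
    # somewhere under /opt/apps/<name>/.
    APP_ROOT = Path("/opt/apps")

    def bundled_openssl_copies(root: Path = APP_ROOT):
        for app_dir in sorted(root.iterdir()):
            if not app_dir.is_dir():
                continue
            copies = sorted(str(p) for p in app_dir.rglob("libssl.so*"))
            yield app_dir.name, copies

    if __name__ == "__main__":
        for app, copies in bundled_openssl_copies():
            # Every entry here is a separate upstream you depend on for the fix.
            print(f"{app}: {len(copies)} bundled libssl file(s)")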
This is the core of the problem between Devs and sysadmins. Sysadmins come from a mindset of a polished, working system which never needs to change. They deliver stability and reliability to the business.
Devs come from a mindset of actively creating change, to add new features and deliver new value and product to the business. As a Dev I do have to say that many Devs don't have enough experience in operations to understand properly how to help sysadmins; many don't understand the complexities of that job.
These two perspectives are at odds, and they should be. The new tools, like Docker, start giving everyone what they want... Devs pick their dependencies and, in theory, can't stomp on the sysadmin's pristine environment.
To respond directly to your question: because there are new things available in new libraries that allow us to develop new features!
The new things you need to develop new features are few and far between.
99% of web software written these days could fulfil identical use cases on an IBM 3270 from 40 years ago. You enter something into a form and it gets stored in a database. You enter something into a field and it generates a report. That's all Amazon, Facebook, Google, any e-commerce site are.
Sure it might be nice to use a new version of that new JS framework that all the twitterati are going crazy about, but does it deliver value to the business that justifies the risk and investment?
And yet none of those things arose 40 years ago. All of the nuances of all the code written since then make a difference, despite duplicating "identical use cases".
People could do great things just with punch cards, yet somehow technology kept marching on.
If developers want to use newer stuff, they usually have a good reason. The ability to hack around the deficiencies of old dependencies does not mean that one couldn't get a better, cheaper solution with newer technology.
> People could do great things just with punch cards, yet somehow technology kept marching on.
That's not the situation I described - punch cards don't qualify.
The situation I mean is where developers insist on writing software against version X, which doesn't compile on X-1, is buggy on X, and might not compile again on X+1. For a concrete example: new C++ features that aren't correctly implemented, and that lead to harder-to-read code and worse error messages when applied to day-to-day problems (which these features were never meant for).
If it is as you say, then why upgrade ever? How would we even discover bugs in software until it is used?
To have progress we need to change things. When we change things, we may break things, regardless of tests.
To quote Dijkstra: "testing can be a very effective way of showing the presence of bugs, but it is hopelessly inadequate to show their absence". From "The Humble Programmer".
Production is the only way to eventually discover the stability of any software, even with 100% test coverage. It's a necessary evil in the support of progress.
If you run into a bug or problem with a 3rd party component (open source library, commercial tool, whatever), one of the first things they are going to ask you to do is upgrade. The fact you're on an old version of some library is an easy (and sometimes correct) scapegoat for problems.
Put yourself in the 3rd party's shoes: if you spend a bunch of time trying to fix a problem that turns out to be a bug in a separate library that's already been fixed, that's entirely wasted time.
The same goes for direct usage: you're likely to spend time fixing problems that have already been fixed.
Upgrading the version of the library wouldn't be a problem if the concept of stable ABIs were as prevalent as it was 15 years ago. Back then, the major.minor version number system was used as a signal that it was safe to upgrade to a newer version of a library without worrying that the entire application stack was going to come crashing down because the developer of said library decided to rework some part of the library without providing any backwards compatibility.
Put another way, a sysadmin could feel confident that moving from 1.52->1.53 would be a painless and transparent operation, and that the provider of said library would continue to release 1.x branches with no breaking ABI changes for some length of time. The expectation was that at some point the library provider would release a 2.0, which would require a more careful testing/deployment schedule, likely alongside other upgrades to the system.
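That contract is easy to state in code. A tiny sketch of the rule being described (hand-rolled version parsing, purely illustrative): same major plus a newer-or-equal minor is treated as a drop-in upgrade, while a major bump is not.

    def parse_version(v: str) -> tuple:
        # "1.52.3" -> (1, 52, 3); good enough for a plain major.minor[.patch] scheme.
        return tuple(int(part) for part in v.split("."))

    def is_drop_in_upgrade(installed: str, candidate: str) -> bool:
        # The old contract: same major and a newer (or equal) minor means no
        # ABI breakage, so the upgrade should be painless and transparent.
        old, new = parse_version(installed), parse_version(candidate)
        return new[0] == old[0] and new[1:] >= old[1:]

    print(is_drop_in_upgrade("1.52", "1.53"))  # True  - roll it out
    print(is_drop_in_upgrade("1.53", "2.0"))   # False - plan a real test cycle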
Today, that is all out the window: very few open source projects (and it's infecting commercial software too) provide "stable" branches. The agile, throw-out-the-latest-untested-version mentality is less work than the careful plan/code/test/release, followed by fix/test/release, cycles.
This is a major rant of mine, as upgrading the vast majority of open source libraries usually just replaces one set of problems with another. Having been on the hook for providing a rock-solid, stable environment for critical infrastructure (think emergency services, banks, power plants, etc.), I came to the conclusion that for many libraries/tools you had better be prepared to fix and backport bug fixes yourself, unless you were relying solely on libraries shipped in something like RHEL/SLES (and even then, if you wanted it fixed fast, you had better be prepared to duplicate/debug the problem yourself).
> Put another way, a sysadmin could feel confident that moving from 1.52->1.53 would be a painless and transparent operation
This is what Semantic Versioning [1] aims to achieve, but as you highlighted, it still requires the maintainer(s) of the project to actually deliver stable software, regardless of what the version number says. I think some people took "move fast and break things" a bit too literally.
A project that follows SemVer and has good automated test coverage is definitely on the right track though, and generally should be a pretty safe upgrade (of course it's important to know their track record).
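For what it's worth, that kind of constraint can be expressed directly in tooling; a small sketch assuming the third-party `packaging` library (pip install packaging): a compatible-release pin accepts the rest of the 1.x line but refuses 2.0.

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    # "~=1.52" means ">=1.52, ==1.*": take newer 1.x releases, refuse 2.0,
    # which under SemVer is allowed to break the API.
    pin = SpecifierSet("~=1.52")

    for candidate in ["1.52", "1.53", "1.99", "2.0"]:
        print(candidate, Version(candidate) in pin)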
"Move fast and break things ... in a separate branch with continuous integration running an extensive test suite" isn't quite as catchy but is what should be happening.
> The same goes for direct usage: you're likely to spend time fixing problems that have already been fixed.
That depends on whether it's a feature release or a fix release. Feature releases might or might not include bug fixes, but they typically include new bugs. I welcome localized fixes; however, they are not as common because of constrained resources. (Fix-only releases are the idea behind Debian stable. Of course, it only works to an extent.)
A different perspective: I prefer to have the bugs that I already know about, and know how not to trigger.
Because those libraries have bugs, sometimes catastrophic ones. Sometimes they must update, due to API changes or other factors outside of their control. If your organization relies on keeping things static as a means to stability, one day that rule will have to break, and you may be pretty underprepared for it.