I upgrade all dependencies every time I deploy anything. If you don't, a zero day is going to bite you in the ass: that's the world we now live in.
If upgrading like that scares you, your automated testing isn't good enough.
On average, the most bug-free Linux experience is to run the latest version of everything. I wasted far more time backporting bugfixes before I started doing that than I have spent on new bugs since.
Maybe your codebase truly is that riddled with flaws, but:
1) If so, updating will not save you from zero days, only from whatever bugs the developers have found.
2) Most updates are not zero day patches. They are as likely to (unintentionally) introduce zero days as they are to patch them.
3) In the case where a real issue is found, I can't imagine it's hard to use the aforementioned security vendors and follow their recommendations to force updates outside of a cooldown period.
If you mean operating system code, that is generally opaque, and not quite what the article is talking about: you don't use a dependency manager to install code you have reviewed when performing operating system updates. (You can, and that is fantastic for you, but I imagine it's not what you mean.)
Although, even for operating systems, cooldown periods on patches are not only a good thing but something that, e.g., a large org that can't afford downtime will employ (when managing Windows or Linux software patches, say). The reasoning is the same: updates have just as much chance of introducing bugs as of fixing them, and although you hope your OS vendor does adequate testing, especially when you cannot audit their code you have to wait, either so some third-party security vendor can assess the update's safety, or so you can perform adequate testing yourself.
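For what it's worth, this kind of policy is easy to express in tooling these days. Here's a minimal sketch using a Renovate config (written as JSON5 so it can carry comments); minimumReleaseAge and vulnerabilityAlerts are real Renovate options, but whether the cooldown can be overridden inside the vulnerabilityAlerts block is my assumption, so verify against the docs for your version:

    // renovate.json5 -- hedged sketch, not a drop-in config
    {
      extends: ["config:recommended"],
      // Routine updates sit out a cooldown before a PR is even raised.
      minimumReleaseAge: "14 days",
      // Config in this block applies only to PRs Renovate raises for
      // known vulnerabilities. Dropping the cooldown here is the
      // "force updates outside the cooldown period" escape hatch
      // (assumed to be overridable here -- check your Renovate version).
      vulnerabilityAlerts: {
        labels: ["security"],
        minimumReleaseAge: null,
      },
    }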
Upgrading to a new version can also introduce new exploits, and no amount of testing can find those.
Some of these can be short-lived, existing only in one minor patch and fixed promptly in the next, but you'll hit them if you constantly and blindly upgrade to the latest.
There are always risks either way, but the latest version doesn't mean the “best” version: mistakes and errors happen, performance degrades, etc.
Personally, I choose to aggressively upgrade and engage with upstreams when I find problems, not to sit around waiting and hoping somebody will notice the bugs and fix them before they affect me :)
> I upgrade all dependencies every time I deploy anything. If you don't, a zero day is going to bite you in the ass: that's the world we now live in.
I think you're using a different definition of zero day than the standard one. By definition, a zero day vulnerability is not going to have a patch you can get with an update.
Only if you already upgraded to the one with the bug in it, and then only if you ignore "this patch is actually different: read this notice and deploy it immediately". The argument is not "never update quickly": it is "don't routinely deploy updates that are not known to be high-priority fixes".
But that isn't what you said? ;P "If you wait seven days, you're pointlessly vulnerable." <- this is clearly a straw man, as no one is saying you'd wait seven days to deploy THAT patch... but, if some new configuration file feature is added, or it is ported to a new architecture you aren't using--aka, the 99.99% of patches--you don't deploy THOSE patches for a while (and I'd argue seven days is way, way too short) until you get a feel that it isn't a supply chain attack (or what will become a zero day). Every now and then, someone tries to fix a serious bug... most of the time, you are just rolling the die on adding a new bug that someone can quickly find and exploit you with.
> this is clearly a straw man, as no one is saying you'd wait seven days to deploy THAT patch...
The policy being proposed is that upgrades are delayed. So in a company where that policy was enforced, I would be required to request an exception to the policy for your hypothetical patch.
That's unacceptable to me. That's requiring me to do extra work for a nebulous, poorly quantified security "benefit". It's a waste of my time and energy.
I'm saying the whole policy is unjustified and should never be applied by default. At all. It's stupid. It's harmful, for zero demonstrable benefit.
I'm being blunt because you seem determined to somehow misconstrue what I'm saying as a nitpicky argument. I'm saying the whole policy is terrible and stupid. If it were forced on me by an employer, I would quit. Seriously.
Renovate (a Dependabot equivalent, I think) creates PRs; I usually walk through them every morning or when there's a bit of downtime. I'm playing with the idea of automerging patches and maybe even minor updates, but up until now it hasn't been hard to keep up.
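If you do try the automerge route, it's only a few lines of packageRules. A sketch (matchUpdateTypes and automerge are real Renovate options; the patch/minor split is just my assumption about a sane default):

    // renovate.json5 -- sketch: automerge patches, review everything else
    {
      packageRules: [
        {
          // Patch-level bumps merge automatically once CI is green.
          matchUpdateTypes: ["patch"],
          automerge: true,
        },
        {
          // Minor and major bumps stay as ordinary PRs for manual review.
          matchUpdateTypes: ["minor", "major"],
          automerge: false,
        },
      ],
    }

Note that automerge only fires after your required status checks pass, which loops back to the point upthread: it's only as safe as your test suite.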
I’ve seen a lot of CI/CD setups and I’ve never seen that. If that were common practice, it would certainly simplify the package manager, since there would be no need for lockfiles!
I do see some CI running without lockfiles, and there's still a contingent that believes that libraries should never commit their lockfiles. It's a reasonably good idea to _test_ a configuration without the lockfile, since any user of your dependency is using _their_ lockfile that their local solver came up with, not yours, but this ought to be something you'd do alongside the tests using the lockfile. So locking down the CI environment is a good idea for that and many other reasons.
Realistically, no one does full side-by-side tests with and without lockfiles, but it's a good idea to at least do a smoke test or two that way.
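One cheap approximation, if you happen to run Renovate (mentioned upthread), is its lockFileMaintenance option: it periodically regenerates the lockfile from scratch, and the resulting PR runs your normal CI against a fresh solve, which is roughly the "smoke test without your lockfile" described above. A sketch (lockFileMaintenance and its text-based schedule syntax are real, though defaults vary by version):

    // renovate.json5 -- sketch: weekly fresh-solve smoke test
    {
      lockFileMaintenance: {
        // Recreate the lockfile from scratch on a schedule; the PR's CI
        // run effectively smoke-tests whatever a fresh resolution gives.
        enabled: true,
        schedule: ["before 5am on monday"],
      },
    }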