Neither; you’re reading it wrong. Think of it as codebases getting more reliable over time as they accumulate fixes and tests. (As opposed to, say, writing code in NodeJS versus C++.)
Age of code does not automatically equal quality of code, ever. Good code is maintained by good developers. A lot of bad code gets pushed out because of management pressure, other circumstances, or just bad devs. This is a can of worms you're talking your way into.
You're using different words - the top comment only mentioned the reliability of the software, which is only tangentially related to the quality, goodness, or badness of the code used to write it.
Old software is typically more reliable, not because the developers were better or the software engineering targeted a higher reliability metric, but because it's been tested in the real world for years. Even more so if you consider a known bug to be "reliable" behavior: "Sure, it crashes when you enter an apostrophe in the name field, but everyone knows that, there's a sticky note taped to the receptionist's monitor so the new girl doesn't forget."
Maybe the new software has a more comprehensive automated testing framework - maybe it simply has tests, where the old software had none - but regardless of how accurate you make your mock objects, decades of end-to-end testing in the real world is hard to replace.
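To make that apostrophe story concrete, here's a rough sketch in Python (the table, function, and query are made up, not from any real system) of the class of bug that tidy mock data never triggers but the first real O'Brien through the door does:

    import sqlite3

    def save_customer(conn, name):
        # Naive string concatenation: works for years on "Smith" and "Jones",
        # and every unit test with clean mock data passes.
        conn.execute("INSERT INTO customers (name) VALUES ('" + name + "')")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT)")
    save_customer(conn, "Smith")           # fine
    try:
        save_customer(conn, "O'Brien")     # real-world input: syntax error (or worse, injection)
    except sqlite3.OperationalError as err:
        print("crash on apostrophe:", err)

    # The fix is a one-line parameterized query --
    #   conn.execute("INSERT INTO customers (name) VALUES (?)", (name,))
    # -- but only years of real input tell you which of these landmines your
    # test data never stepped on.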
As an industrial controls engineer, when I walk up to a machine that's 30 years old but isn't working anymore, I'm looking for failed mechanical components. Some switch is worn out, a cable got crushed, a bearing is failing...it's not the code's fault. It's not even the CMOS battery failing and dropping memory this time, because we've had that problem 4 times already, we recognize it and have a procedure to prevent it happening again. The code didn't change spontaneously, it's solved the business problem for decades... Conversely, when I walk up to a newly commissioned machine that's only been on the floor for a month, the problem is probably something that hasn't ever been tried before and was missed in the test procedure.
Yup, I have worked on several legacy codebases, and a pretty common occurrence is that a new team member will join and think they've discovered a bug in the code. Sometimes they are even quite adamant that the code is complete garbage and could never have worked properly. Usually the conversation goes something like: "This code is heavily used in production, and hasn't been touched in 10 years. If it's broken, then why haven't we had any complaints from users?"
And more often than not the issue is a local configuration issue, bad test data, a misunderstanding of what the code is supposed to do, not being aware of some alternate execution path or other pre/post processing that is running, some known issue that we've decided not to fix for some reason, etc. (And of course sometimes we do actually discover a completely new bug, but it's rare).
To be clear, there are certainly code quality issues present that make modifications to the code costly and risky. But the code itself is quite reliable, as most bugs have been found and fixed over the years. And a lot of the messy bits in the code are actually important usability enhancements that were bolted on after the fact in response to real-world user feedback.
Old software is not always more reliable though, which is my point. We can all think of really old, still-maintained software that is awful and unreliable. Maybe I'm just unlucky and get hired at places riddled with low-quality software? I don't know, but I do know nobody I've worked with is ever surprised, only the junior developers.
The reality is that management is often misaligned with proper software engineering craftsmanship. That's been true at every org I've worked at except one, and only because the top director who oversaw all of us was also a developer and let our team lead direct us however he saw fit.
Old code that has been maintained (bugfixed), but not messed with too much (i.e. major rewrites or new features) is almost certain to be better than most other code though?
"Bugfixes" doesn't mean the code actually got better, it just means someone attempted to fix a bug. I've seen plenty of people make code worse and more buggy by trying to fix a bug, and also plenty of old "maintained" code that still has tons of bugs because it started from the wrong foundation and everyone kept bolting on fixes around the bad part.
One of the frustrating truths about software is that it can be terrible and riddled with bugs, but if you just keep patching enough bugs and use it the same way every time, it eventually becomes reliable software ... as long as the user never does anything new and no one pokes the source with a stick.
I much prefer the alternative where it's written in a manner where you can almost prove it's bug free by comprehensively unit testing the parts.
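A minimal sketch of what I mean, in Python (the function is a made-up example, not from any particular codebase): keep the logic in small pure functions with no I/O or hidden state, and the tests can come close to covering the whole behaviour.

    def net_price(gross: float, discount_pct: float) -> float:
        """Pure function: same inputs, same output, nothing hidden to surprise you."""
        if not 0 <= discount_pct <= 100:
            raise ValueError("discount_pct must be between 0 and 100")
        return round(gross * (1 - discount_pct / 100), 2)

    def test_net_price():
        # Boundaries, a typical case, and the error path -- close to exhaustive
        # for a function this small.
        assert net_price(100.0, 0) == 100.0
        assert net_price(100.0, 100) == 0.0
        assert net_price(19.99, 15) == 16.99
        try:
            net_price(50.0, 150)
        except ValueError:
            pass
        else:
            raise AssertionError("expected ValueError for out-of-range discount")

    test_net_price()

You still can't unit test your way to end-to-end confidence, but at least each part is demonstrably doing what it says.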
It actually might. Older code running in production is almost automatically regression tested with each new fix. It might not be pretty, but it's definitely more reliable for solving real problems.
The list of bugs tagged 'regression' at work certainly suggests it gets tested... But actually fixing those regressions? That's a lot of dev time for things that don't really have time allocated to them.
I think we all agree that the quality of the code itself goes down over time. I think the point that is being made is that the quality of the final product goes up over time.
E.g. you might fix a bug by adding a hacky workaround in the code; better product, worse code.
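A hypothetical illustration in Python (the customer ID and function are invented): the day this ships, the product gets better and the code gets a little worse.

    def invoice_total(customer_id: str, subtotal: float, tax_rate: float) -> float:
        # HACK: customer 4711 gets double-taxed somewhere upstream and nobody
        # has found out why, so compensate here. Do not remove.
        if customer_id == "4711":
            tax_rate = tax_rate / 2
        return round(subtotal * (1 + tax_rate), 2)

    print(invoice_total("4711", 100.0, 0.20))   # 110.0 -- "correct" for the one customer who complained
    print(invoice_total("1234", 100.0, 0.20))   # 120.0

The bug report gets closed and the user is happy; the landmine stays in the code for the next maintainer.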
The author didn't mean that an older commit date on a file makes code better.
The author is talking about the maturity of a project. Likewise, as AI technologies become more mature we will have more tools to use them in a safer and more reliable way.
I've seen too many old projects that are not by any means better, no matter how many updates they get, because management defines the priorities. I'm not alone in saying I've been on a few projects where the backlog is rather large. When your development is driven by marketing people trying to pump up sales, all the "non-critical" bugs begin to stack up.
Survivorship bias is real, but it misses an important piece of the story when it comes to software, which doesn't just survive but is also maintained. Sure, you may choose to discard/replace low-quality software and keep high-quality software in operation, which leads to survivorship bias, but the point here is that you also have a chance to find and fix issues in the one that survived, even if those issues weren't yet apparent in version 0.1. The author is not trying to say that version 0.1 of 30-year-old software was of higher quality than version 0.1 of modern software -- they're saying that version 9 of 30-year-old software is better than version 0.1 of modern software.
In my experience, actively maintained but not heavily modified applications tend towards stability over time. It doesn't even matter whether the codebase is good or bad -- even bad code will become less buggy over time if someone is working on bug fixes.
New code is the source of new bugs. Whether that's an entirely new product, a new feature on an existing project, or refactoring.
Well, yes, exactly. I'm not trying to claim that old code is more reliable just because it was written a long time ago; I'm claiming that old code is more reliable because of survivorship bias. If code was first written 20 years ago and is still in production, unchanged, I can be relatively certain there are no stop-the-world bugs in those lines. (This says nothing about how pretty the code is, though.)