I'm not trying to be overly dramatic, but I think it's precisely when the industry accepted this as a law -- instead of treating it as something that needs to be trained out of junior programmers -- that software quality started to tank.
Consider any other form of engineering. Take some kind of screw. It has a documented specification in terms of torque, material strength and whatnot. Good engineers on the customer side will use the screws in a way that keeps within the spec. And good engineers on the supplier side will find ways to fulfill that specification as cheaply (which usually also means as narrowly) as possible.
There could be a kind of Hyrum's Law at play, if hardware engineers were idiots. Let's say that the screws accidentally overfulfil the specification today by 20%, and a customer measures the material to figure this out, and starts to depend on that. A year later, the supplier finds a cheaper way to produce the screws (or introduces a binning process) and as a consequence, the screws only exceed the spec by 5%, and the customer's product breaks. Who's responsible? The customer. And I should add, obviously.
This is fixed by training engineers. No one in their right mind would introduce artificial faults into their screws purely to prevent customers from depending on the excess strength. But that's exactly the kind of thing that's regularly suggested to guard against Hyrum's Law.
So whenever I see software developers on my team look at the source code of a library to figure out "whether it's thread-safe" or "whether the sort order is stable" or whatever, I die a little on the inside. The problem is, you almost have to do that, because we're two generations of software engineers into this and the practice is so accepted now that libraries no longer bother to document what they do (and don't) guarantee.
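To make that concrete, here's a minimal Rust sketch (the event data and the imagined assertion are mine, invented for illustration) of what such a dependency on undocumented behavior looks like:

    fn main() {
        // Events tagged with a priority; two events share priority 1.
        let mut events = vec![(1, "first"), (2, "other"), (1, "second")];

        // The docs for sort_unstable_by_key explicitly do NOT guarantee the
        // relative order of equal elements. If today's implementation happens
        // to keep "first" before "second", code written against that fact is
        // depending on an implementation detail, not a contract.
        events.sort_unstable_by_key(|&(priority, _)| priority);
        println!("{:?}", events);

        // Something like `assert_eq!(events[0].1, "first")` here is exactly
        // the Hyrum's Law dependency: it may pass today and break on the next
        // stdlib upgrade, and the library did nothing wrong.
    }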
And then people wonder why every existing piece of software needs a fully staffed team nowadays just to keep it working. And why as a consequence, even core products built by competent companies (like Google Maps) have regressions in 15-year-old features every other week.
Yes, the relationship between "Hyrum's Law" and having a terrible documentation culture is strong.
The important part of the "Law" is the bit where it says "it does not matter what you promise in the contract".
That's definitely going to be true in a place where neither writing nor reading documentation is taken seriously (and one of the main things this site teaches us is that Google is such a place).
Yeah, no, humans can't hold complicated documentation in their heads. The whole experience of being human is finding a way to get by with a simple model which fits in your head but isn't wrong enough to cause trouble, and that's exactly what Hyrum is reflecting.
What you're getting at is the C++ "Just don't make any mistakes" approach to software engineering, which is a disaster that has cost our civilisation a great deal.
That is emphatically not what I am "getting at". I cannot express my opposition to that approach strongly enough.
I do not believe that the place people should hold documentation is "in their heads".
I do not believe that Hyrum's "Law" is helpful in getting to a situation where people's beliefs about what software will do match reality.
This isn't the first time I've come across people (particularly around the Rust community) who have got the idea into their heads that saying "reading documentation is important" is somehow close to saying "Real programmers don't make mistakes". I think that conflation is doing great harm.
If you accept that people are going to rely on things you didn't contract for by mistake then you're right back to Hyrum's Law. It's that easy.
Hyrum's Law isn't about what you should do, or how things should be, it's telling you an observable fact about our world that some people don't like. So no, the law isn't going to fix people's beliefs any more than Newton's Laws did for weird beliefs about motion.
I'd actually say the importance of documentation is better understood in the Rust community. Including, which is vital here, the importance of not relying on humans remembering to read all this text when you've got better options. Rust has documentation telling you that you're not promised whether or not some elements which compare equal are swapped when you [T]::sort_unstable. But it doesn't need to spend a lot of time warning you that you shouldn't [T]::sort_unstable a slice of type T which doesn't even claim to have an ordering, because the compiler rejects such nonsense anyway.
Indeed, even the naming is an example. In C++ that function is just named sort. Because, you know, an unstable sort is faster†. Will it sometimes surprise some poor noob because it's unstable? Sure, but apparently that's OK, because if they had read and properly digested the documentation they would know it's an unstable sort. I suggest that if the function were named better, the user would be much less likely to make this mistake before they even glance at the documentation.
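To show the flavour of both points (the Point type, the values, and the paraphrased compiler error below are mine, not from the thread), a minimal Rust sketch:

    #[derive(Debug)]
    struct Point {
        x: f64,
        y: f64,
    }

    fn main() {
        // The contract is carried in the name: `sort` is the stable one, and
        // the faster variant that may reorder equal elements is spelled out
        // as `sort_unstable`.
        let mut xs = vec![3, 1, 2, 1];
        xs.sort();          // stable
        xs.sort_unstable(); // unstable, and it says so before you open any docs

        // And the rule the docs don't need to belabour -- "don't sort things
        // that have no ordering" -- is enforced by the compiler instead.
        // Point derives no Ord, so uncommenting the line below is a compile
        // error (roughly: the trait bound `Point: Ord` is not satisfied).
        let mut points = vec![Point { x: 1.0, y: 2.0 }, Point { x: 0.5, y: 3.0 }];
        // points.sort_unstable();

        println!("{:?} {:?}", xs, points);
    }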
> Hyrum's Law isn't about what you should do, or how things should be, it's telling you an observable fact about our world
But it isn't.
Neither Hyrum, nor anybody else, has ever seen a system where "all observable behaviors" were depended on by somebody. And if they somehow had, they couldn't know that it hadn't mattered how clear the documentation had been.
There are two much weaker statements which I think are true:
- "No matter how carefully you document your contracts, it will happen from time to time that you leave something unstated and people reasonably guess wrongly what you intended."
- "No matter how carefully you document your contracts, from time to time some people will choose to rely on things you didn't promise, without caring about that."
As well as actually being true, these statements have the advantage of not falsely implying that you can't improve the situation by putting effort into documentation.
Surely your statements are just corollaries which are noticeable with fewer users? Hyrum's Law is more succinct because of its prefix, "With a sufficient number of users".
And I think we do see small systems where all observable behaviors are indeed depended upon; lots of trivial systems exhibit exactly this property. It just doesn't trigger the part of Hyrum's Law that apparently annoys you - "it does not matter what you promise in the contract" - because if anybody did write a contract it would state the entire behavior, so no surprises are possible.
And that permits a valuable conclusion from Hyrum's law. It's better to design my interface so that it's so simple any fool will use it right, than to document all the weird sharp edges of my interface so that I can potentially win an "Um, actually" episode each time a fool cuts themselves on the sharp edges. That's not always possible but often in our industry it's apparent nobody was even trying.
Laws of nature are not prescriptive: they only predict what will happen. It is up to us to fend off consequences we don't want, by whatever means we can muster. The law means that calling out non-promises in the name itself (sort_unstable rather than sort) is favored. But that is often not practical.
> it's precisely when the industry accepted this as a law -- instead of treating it as something that needs to be trained out of junior programmers -- that software quality started to tank.
And when exactly do you think that happened? When do you think there was this golden age where people actually relied only on documented behavior?
Windows and Linux have both been bending over backwards to retain all sorts of undocumented features since the early-to-mid '90s. Glibc was notoriously hampered by emacs relying on some details of its internals. C programs relying on implicitly or explicitly undefined behavior have caused endless handwringing for as long as there has been a C standard. The list goes on and on. Relying on implementation details has been the modus operandi in computing since day 1.