
Note that "Don't make mistakes" is no more actionable for maintenance of a huge cargo ship than for your 10MLoC software project. A successful safety strategy must assume there will be mistakes and deliver safe outcomes nevertheless.


Obviously this is the standard line in any disaster prevention, and it makes sense 99% of the time. But what's the standard line about where this whole protocols-to-catch-mistakes thing bottoms out? Obviously the people executing the protocol can make mistakes, or fall victim to normalization of deviance. The same is true for the next level of safety protocol you layer on top of that. At some level, the only answer really is just "don't make mistakes", right? And you're mostly trying to make sure you reach that point at a level where it's easier to not make mistakes, like simpler decisions not under time pressure.

Am I missing something? I feel like one of us is crazy when people talk about improving process instead of assigning blame, without ever addressing the base case.


Normalization of deviance doesn't happen through people "making mistakes", at least not in the conventional sense. It's a deliberate choice, usually a response to bad incentives, or sometimes even a reasonable tradeoff.

I mean, ultimately establishing a good process requires making good choices and not making bad ones, sure. But the kind of bad decisions you have to avoid are not really "mistakes" in the same way that, like, switching on the wrong generator is a mistake.


Quite: normalization is another failure mode, besides simple mistakes, that process has to account for.


It kind of is though. There's a lot less opportunity for failures at the limit and unforeseen scale. Mechanical things also mostly don't keel over or go haywire with no warning.



