
How would you design your bureaucracy so that this kind of thing can't happen? I see this type of failure all the time in organizations big and small. Sometimes things are just too complex for any single auteur to understand the entire system, and when every department optimizes for its own narrow goal, shit can really hit the fan.


Don't disincentivize defect reporting?

Don't restrict your definition of "actionable" to only those fixes that come at no monetary cost?

Don't get rid of your Quality people. Definitely don't get rid of them for raising too many defects.

Don't stop focusing on "the box" (i.e. the plane) just because customers already assume it will be "high quality", and don't reengineer a physical engineering/design firm into some ungodly act of "financial innovation".

Don't treat regulations as something to be worked around.

Don't skimp on Acceptance Testing of outsourced software deliverables.

Make sure your CEO and Sales staff understand there are things you cannot (and should not) sell.

Listen to your Unions. Don't try to work around them.

These aren't hard. And by not doing them, Boeing set the stage for this cascading failure of epic proportions.

Pity that American manufacturing and engineering firms never (in my experience) took W. Edwards Deming seriously. His 14 Points are a hell of a good start.


Eventually, artificial intelligence.

Maybe not in the near future, but as technology progresses and every manufacturer strives to optimize their designs with the latest features, it will become an insurmountable task to oversee every aspect of it (efficiently). I'm not talking about actively designing, but rather about warning/flagging potential errors. In very complex enterprises like global transport or building skyscrapers there is a lot to learn from experience and little human time, but it might be very cost-effective to train an all-observing, self-learning AI to look over everyone's shoulder and warn you about using the right type of bolts, or about how the coming heavy rains in Guatemala might affect your supply chain.

It's not that far-fetched when you realize it doesn't need to really understand anything, just be very good at playing word association and micromanaging.
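
To make that concrete, here's a minimal sketch of the flagging idea, with an entirely hypothetical parts log and made-up part numbers, and no ML at all; just the kind of dumb consistency check such a system would automate at scale:

    # Hypothetical example: flag installed parts that don't match spec.
    # Locations and part numbers are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Installation:
        location: str
        spec_part: str       # part number the drawing calls for
        installed_part: str  # part number actually logged as installed

    def flag_mismatches(log):
        """Return a warning for every install that deviates from spec."""
        return [
            f"WARNING at {rec.location}: spec says {rec.spec_part}, "
            f"got {rec.installed_part}"
            for rec in log
            if rec.installed_part != rec.spec_part
        ]

    log = [
        Installation("wing-station-12", "BOLT-A113", "BOLT-A113"),
        Installation("door-plug-4L",    "BOLT-A113", "BOLT-A108"),  # wrong bolt
    ]
    for warning in flag_mismatches(log):
        print(warning)

The hard part isn't the check; it's getting honest data into the log in the first place, which loops straight back to the defect-reporting incentives upthread.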


Preventing problems like this seems to me to be exactly what AI as we know it is least suited to. Organizational failures generally seem to be failures to grasp the right context, because no individual holds all of it, and my impression of AI today is that it doesn't even engage with that problem. You can have a computer program that recognizes cats, or plays Go, but nobody even thinks about how to make that same program spontaneously respond to a spilled glass of milk, or a hostage situation, or a fire alarm, let alone take everything in the world into account while doing so. It feels to me like people's tunnel vision is getting worse, and the "intelligent" software is inheriting it.


AI is only as good as its training data and goals/success conditions.
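
A toy illustration of that point, with entirely synthetic numbers: if defect reports were suppressed the way the top comment describes, even a perfectly well-fit model learns the reported defect rate, not the true one.

    # Synthetic data: only 30% of true defects were ever reported,
    # so the labels are censored and the model under-flags.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    stress = rng.normal(size=(n, 1))              # one measurable feature
    true_defect = (stress[:, 0] + rng.normal(0.5, 1.0, n)) > 1.5

    # Reporting was disincentivized: most real defects never became labels.
    reported = true_defect & (rng.random(n) < 0.3)

    model = LogisticRegression().fit(stress, reported)

    print("true defect rate:     ", true_defect.mean())                        # ~24%
    print("model-predicted rate: ", model.predict_proba(stress)[:, 1].mean())  # ~7%
    # The model faithfully reproduces the *reported* rate, not the
    # true one: garbage labels in, garbage flags out.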


Yes, AI is designed in very particular situations to fit very specific tasks. I never meant there would be a single mind controlling the whole world. Today you could program an assisting AI that told you when you "missed a spot" while painting your house; it's simply not cost-effective. But eventually driving and medical-diagnosis AI, while imperfect, will have a better success rate than humans. Do you really think that won't apply to industrial production eventually, say in a hundred years?



