Hacker News

For those not on FB:

Link: https://www.scientificamerican.com/article/no-one-can-explai...

"We often hear that AI systems must provide explanations and establish causal relationships, particularly for life-critical applications. Yes, that can be useful. Or at least reassuring.

But sometimes people have accurate models of a phenomenon without any intuitive explanation or causation that provides an accurate picture of the situation. In many cases of physical phenomena, "explanations" contain causal loops where A causes B and B causes A.

A good example is how a wing causes lift. The computational fluid dynamics model, based on the Navier-Stokes equations, works just fine. But there is no completely accurate intuitive "explanation" of why airplanes fly. Is it because of the Bernoulli principle? Because a wing deflects the air downwards? Because the air above the wing wants to keep going straight, but in doing so creates a low-pressure region above the wing that deflects the flow downwards and sucks the wing upwards? All of the above, but none of the above by itself.

Now, if there ever was a life-critical physical phenomenon, it is lift production by an airliner wing. But we don't actually have a "causal" explanation for it, though we do have an accurate mathematical model and decades of experimental evidence.
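As an aside, the point about accurate models without causal explanations can be made concrete with the Kutta-Joukowski theorem: it predicts the lift per unit span of a wing directly from the circulation around it, while saying nothing about *why* that circulation arises. A minimal sketch (the numerical values are made up for illustration, not taken from any real aircraft):

```python
# Kutta-Joukowski theorem: lift per unit span L' = rho * V * Gamma.
# The formula predicts lift accurately from the circulation Gamma,
# but it is silent on the causal question of why circulation develops.

def lift_per_unit_span(rho: float, v: float, gamma: float) -> float:
    """Lift per unit span (N/m), given air density rho (kg/m^3),
    freestream speed v (m/s), and circulation gamma (m^2/s)."""
    return rho * v * gamma

# Illustrative values: sea-level air density, 70 m/s, Gamma = 100 m^2/s.
print(lift_per_unit_span(1.225, 70.0, 100.0))  # 8575.0 N per metre of span
```

The prediction is quantitative and testable in a wind tunnel, yet the theorem itself offers no intuitive story about the mechanism.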

You know what other life-critical phenomena we don't have good causal explanations for? The mechanisms of action of many drugs (if not most of them). An example? How does lithium treat bipolar disorder? No one really knows. But we do have considerable empirical evidence from extensive clinical studies.

This is not to say that causality is not an important area of research for AI. It is. But sometimes, requiring explainability is counterproductive."




