That is not the only thing that matters. There is some interesting psychology at play here.
One thing I rarely see discussed is that you are being killed by a programming error, not by a human. Of course, humans made that programming error, but they have no face: there is no individual to blame, no one to apologise, no one to answer for their actions.
At least if someone is hit by another driver, the driver can be prosecuted in the case of negligence, or, if it was an honest mistake, can apologise. It's human. It's different.
I don't know. People still fly despite the occasional mechanical error, programming error, or other non-individual error leading to fatalities.
I think if it were an order of magnitude safer, that would override many of those emotional concerns. Insurance companies are decent at thinking rationally about risk. People might have emotional reservations, but I think a halved auto insurance bill would be enough to sway a critical majority.
> Of course, the humans made the said programming error
Are you sure?
You do understand that self-driving vehicles inherently rely upon machine learning systems (and I am not talking about simple regression here), which are trained, and ultimately make decisions, by means that we still don't fully understand?
What I'm trying to get at here is that, barring faulty sensors or mechanical systems, any decision the control system of a self-driving vehicle makes is most likely arrived at via the inner workings of one or more machine learning systems (likely deep neural networks of some kind, though other approaches are being experimented with as well).
They are not a huge series of if-then-else statements or anything of that nature; such hand-coded algorithmic approaches were tried in the past and dismissed as unworkable due to the sheer complexity inherent in driving.
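To make that contrast concrete, here's a toy sketch (not any real vehicle's control stack — the features, weights, and sizes are all made up): a learned policy is a parameterized numeric function from sensor features to a control output, where the "decision logic" lives in trained weights rather than in rules anyone wrote down.

```python
import math

def steer(features, weights, biases):
    """Tiny one-hidden-layer network: sensor features -> steering in [-1, 1].

    In a real system the weights come from training on driving data; nobody
    hand-writes them, which is why there's no if-else to point at afterwards.
    """
    # Hidden layer: weighted sums of the inputs, squashed through tanh.
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(weights[0], biases[0])]
    # Output layer: weighted sum of hidden activations -> steering command.
    return math.tanh(sum(w * h for w, h in zip(weights[1], hidden)) + biases[1])

# Hypothetical features: lane offset, heading error, distance to obstacle.
# These weights are arbitrary stand-ins for what training would produce.
W = ([[0.8, -0.2, 0.1],
      [-0.5, 0.9, 0.0]],   # hidden-layer weights (2 units x 3 features)
     [1.2, -0.7])          # output-layer weights (1 output x 2 units)
b = ([0.0, 0.1], 0.0)      # hidden and output biases

angle = steer([0.3, -0.1, 0.9], W, b)  # normalized steering command
```

The point is that even this trivial version gives you no rule to inspect: the behaviour is an emergent property of the numbers, and scaling that up to millions of weights is what makes the system's decisions hard to explain after the fact.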
Honestly, what drove the field forward was largely work done at CMU, notably ALVINN; but there was interplay between CMU, Hans Moravec, Stanford (the Cart, among others), Thrun, and others. It has a very long, convoluted, but rich and well-documented history (I encourage anyone interested in robotics, AI/ML, and/or self-driving vehicles to read up on it; it's fascinating). ALVINN ultimately pointed the way toward neural networks and deep learning, but the tech couldn't make the leap forward until a number of technologies converged.
I know this. You know this. But we are in a technology bubble. Machine learning has no face for the general public either. Telling the public that an AI killed their relative is even worse than saying a team at Google did it.