Objectively not true. Autopilot is only used in a small subset of driving situations that are less risky than driving overall. And yet in the last 8 months alone, Autopilot has caused 3 easily avoidable deaths.
Autopilot is getting worse, not better, over time, and Tesla's tendency to introduce regressions (as with the Bay Area crash) means any temporary improvements in its driving AI are just that--temporary.
> And yet in the last 8 months alone, Autopilot has caused 3 easily avoidable deaths.
This number by itself is absolutely meaningless without more context.
In 2017, 37,133 people in the United States died in an automotive crash. It is expected that some number of Tesla owners will die in a crash, regardless of currently engaged safety features.
Tesla has produced around 600,000 vehicles in total, including over 100,000 Model 3 vehicles.
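To put a rough back-of-envelope number on that (a minimal sketch only; the ~270 million registered US vehicles figure and the assumption of a uniform fatality rate across the whole fleet are mine, purely for illustration):

    # Crude base-rate estimate: roughly how many Tesla-owner crash deaths
    # would ordinary risk alone predict over 8 months?
    us_fatalities_2017 = 37_133        # figure cited above
    us_registered_vehicles = 270e6     # assumed, approximate
    tesla_fleet = 600_000              # figure cited above

    deaths_per_vehicle_year = us_fatalities_2017 / us_registered_vehicles
    expected_over_8_months = deaths_per_vehicle_year * tesla_fleet * (8 / 12)
    print(round(expected_over_8_months))   # on the order of ~55

So a raw count of 3 tells you nothing until you know what denominator and baseline it should be compared against.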
I'm not making any claim or dispute about the effectiveness of Autopilot or whether it's improving. I'm just saying that the statement you made is meaningless at best and misleading at worst.
You made several bold claims. Do you have any proof of any of them? The statistic you provided is useless without context. In the last 8 months there were ~24,000 automobile fatalities in the US. What's important is how Tesla Autopilot compares to that.
Also, do you have any proof that Autopilot is getting worse over time that doesn't rely on anecdotes?
I don't need to prove anything, since I'm not the one claiming that Teslas are safer than all other cars in all conditions, as Tesla and Elon Musk are. They need to prove their claims, and spurious comparisons of "miles driven" under conditions in which Autopilot was "working but not actually controlling the car" don't count.
I'm sorry, but everything that you said in relation to Tesla there is wrong. My Nissan with ProPilot can also control the throttle, brake, and steering, but that doesn't make me a mere passenger any more than cruise control does.
It routinely gives false positives on frontal collisions and occasionally does things that would be dangerous if I weren't driving.
The driver in a Tesla is not a mere passenger, regardless of whether or not they have enabled autopilot. The initial warnings on enabling autopilot as well as the attention nags on the steering wheel all make this perfectly clear to the driver.
Weird, given Tesla's claim that with Autopilot "the person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself." [1]
The problem is how the deaths it causes are perceived. An imperfect (aka "randomly kills you") AI is far scarier than a statistic that people interpret as "x% of bad drivers". Most people think they are better than the average driver. Not to mention that a lot of deaths are due to other drivers being bad.
It's why people freak out about terrorism more than they do about car crashes or drug addiction.
Even bad drivers are very, very unlikely to get in any crashes so long as they're not intoxicated or distracted. Most people will probably never get in a crash (that is their fault) in their life.
> when used properly by an attentive driver who is prepared to take control at all times, drivers supported by Autopilot are safer than those operating without assistance. For the past three quarters we have released quarterly safety data directly from our vehicles which demonstrates that.”
Tesla attached a lot more caveats than you did in order to assert superior safety.
There is no reliable citation that you can give to prove that.
Emphasis on reliable, as Tesla's own stats are highly misleading -- because Autopilot is mostly used on highways, they essentially compare highway data against all of the other conditions in which accidents happen.
I'm sure that if Autopilot were active all the time, the rate of accidents would be far higher, and Tesla's numbers wouldn't look so positive.
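To make that confound concrete with purely hypothetical numbers (the 3x highway/city difference and the 50/50 mileage mix below are assumptions for illustration, not real data):

    # Hypothetical illustration of the road-mix confound, not real data.
    crashes_per_mm_highway = 1.0   # assumed crashes per million highway miles
    crashes_per_mm_city    = 3.0   # assumed crashes per million city miles

    # Autopilot miles: essentially all highway.
    autopilot_rate = crashes_per_mm_highway

    # "Everyone else" baseline: a mix of highway and city miles.
    baseline_rate = 0.5 * crashes_per_mm_highway + 0.5 * crashes_per_mm_city

    # Autopilot looks 2x safer here even though, by construction,
    # it adds no safety at all -- the gap comes from where it is used.
    print(baseline_rate / autopilot_rate)   # 2.0

The apples-to-apples comparison would be Autopilot highway miles against human-driven highway miles.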
I think it is likely that certain aspects of the system, such as emergency braking, improve safety, but it may be that the more advanced features (those that actually allow the driver to get away with not paying attention, most of the time) do not. If this is the case -- and I am not saying it is, we need more data -- then the effectiveness of the former should not be used to cover up shortcomings in the latter.
If, as Tesla insists, these autopilot crashes are all due to the driver not paying attention, then it seems very plausible that a better attention-detecting system (one that tracked the driver's gaze, for example) would improve safety. Of course, if the system required drivers to do what Tesla says they should be doing, it would not be so appealing.
And if, as Musk is saying, drivers are confused or mistaken about the capabilities of the vehicle, then the first response should be to stop making ambivalent statements about them.
I don't think engineers should be using logic like this.
Say you stumble on a lost civilization in the woods, and they have yet to discover any form of shelter. Any time it rains or snows they all sit on the ground and half of them die of pneumonia. What do we, as engineers, do to help them? Tie some sticks together and make a lean-to? That would certainly reduce the death rate. But I don't think it would be the ethical thing to do. The ethical thing would be to help them build proper buildings out of the best available materials, install an HVAC system, fire alarms, etc.
In other words, engineering should seek to maximize benefits for users, and there are cases where choosing not to maximize benefits could be seen as unethical, even if you do provide some benefit.
In the case of Tesla, the ethical thing to do would be to include lidar, which can prevent the car from smashing into large objects at full speed.
Just because the accident rate is lower does not mean this death or all the others are acceptable. Tesla had a choice of whether to use best engineering practices, and they chose not to.
The reason they are doing this is money. They couldn't sell cars with expensive lidar on them. And they shot themselves in the foot by charging people thousands of dollars for future "full self driving" capability. Now, if they don't continue forward with that no-lidar plan, it will have serious financial consequences for them.
Note that other self-driving companies have found a way to make lidar financially viable for them. It's not impossible.
That all seems like a big stretch based upon some big assumptions to me. It is far from conclusive that LIDAR alone would make the difference in cases like this, due to the way sensors have to be fused together to handle these kinds of scenarios.
There are fair criticisms around Tesla's marketing and implementation, but I don't think "they should already have lidar on there" is really one of them.
Giving a nomadic tribe a building with HVAC and fire alarms would be unethical. If you then require them to live in those buildings, you would meet the standard of cultural assimilation and destruction, as Canada did to the Inuit. As an engineer, I find this reprehensible and arrogant.