Here are two major situations the NHTSA was involved in that affected Ford, Toyota, Honda, and other non-Tesla companies:
----
"On March 6, 2000, NHTSA began a preliminary inquiry and on May 2, NHTSA began an investigation (PE00-020) concerning the high incidence of tire failures and accidents of Ford Explorers and other light trucks and SUVs fitted with Firestone Radial ATX, ATX II, and Wilderness tires"
"On June 23, 2014, auto manufacturers BMW, Chrysler, Ford, Honda, Mazda, Nissan, and Toyota announced they were recalling over three million vehicles worldwide due to Takata Corporation-made airbags. The reason was that they could rupture and send debris flying inside the vehicle. This was in response to a US National Highway Traffic Safety Administration (NHTSA) investigation that was initiated after the NHTSA received three injury complaints."
-----
Just because this is a current investigation, and Elon Musk likes to play the victim while being one of the richest people in the world, doesn't mean there's some bias against Tesla.
Oh, that was huge. Airbags are traditionally powered by sodium azide. But, to reduce costs, Takata switched to an ammonium nitrate propellant, which is less stable, especially after long exposure to heat and humidity. That did not end well.
I don't know the details other than that it has to do with the long-term climate the car was in.
I posted because many people now simply claim bias or say "it's political" whenever they're investigated by a government organization. If we allow that, it's a free pass for corruption to flourish.
Lots of cars have been in the news when it is suspected that defects have caused crashes. Pick a manufacturer and they’ve probably had a model marred by safety scandal.
100,000 regular cars didn't make the decision to kill their occupants though.
Drunk drivers kill people, so we try to prevent drunk driving. If cars are occasionally killing people due to a design flaw, they should be investigated.
Don't worry though: Tesla will say that autopilot deactivated 10 ms before impact and that it's all the driver's fault, and people will go back to ignoring it.
If drivers make deadly mistakes at X rate, and software makes deadly mistakes at Y rate where Y < X, should we recall the software? What if Y is substantially less than X but still not zero?
The complication is that software makes different mistakes than drivers do. And sometimes software refuses to drive, in which case its accident rates are subject to selection bias. For cars that can be operated by either software or a driver, you'd want to compare exactly equivalent scenarios.
The problem is that Y is fairly uniform but X is not. Good drivers make significantly fewer mistakes than average. For those drivers, switching to software could increase risk by orders of magnitude.
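To make that concrete, here's a minimal sketch with completely made-up numbers (the rates below are illustrative placeholders, not real crash statistics): even when the software's uniform mistake rate Y is below the driver population's average X, a driver who is much better than average can be worse off handing control to the software.

```python
# Illustrative only: invented per-mile fatal-mistake rates, not real data.

# Heterogeneous driver population: skill varies by orders of magnitude.
driver_rates = {
    "good driver": 1e-9,     # mistakes per mile (hypothetical)
    "average driver": 1e-8,
    "bad driver": 1e-7,
}

# Uniform software rate Y, chosen below the population average X.
software_rate = 5e-9

avg_driver_rate = sum(driver_rates.values()) / len(driver_rates)
assert software_rate < avg_driver_rate  # Y < X in aggregate

# Per-driver comparison: the aggregate win hides a loss for good drivers.
for name, rate in driver_rates.items():
    verdict = "riskier" if software_rate > rate else "safer"
    print(f"{name}: software is {verdict} ({software_rate / rate:.2f}x their rate)")
```

For the good driver the software multiplies risk fivefold, even though it beats the population average; which is why "Y < X" alone doesn't settle the recall question.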
The autopilot may deactivate itself in such a situation; it tends not to like uncertainty. But as far as I know, everyone will still consider that a crash under autopilot.