Tesla’s Autopilot was engaged when Model 3 crashed into truck, report states (theverge.com)
59 points by Tomte on May 16, 2019 | hide | past | favorite | 82 comments


"our data shows that, when used properly by an attentive driver who is prepared to take control at all times, drivers supported by Autopilot are safer than those operating without assistance"

This is nonsense. The whole core of the argument against this sort of system is that it induces inattention. The fact that Tesla consistently refuses to share full data on identical models driving with autopilot turned on, enabled but not turned on, and not enabled is extremely suggestive that the overall safety rates for autopilot on are not better.

(edited for formatting)


That might very well be the case.

Still, we as a society usually decide not to ban things that hurt only irresponsible people. There is an exception to this for children, but they are already banned from driving.

If the Autopilot crashes start resulting in a number of deaths outside of the driver's car, that might shift public opinion.


Lawn darts killed only one person before they were banned. I think Autopilot is at 3 or 4 and counting, and Uber's driverless system has killed at least one pedestrian.


Never heard of lawn darts.

> In the previous eight years, 6,100 people had been sent to the emergency room due to lawn darts in the U.S. Out of that total, 81% were 15 or younger, and half of them were 10 or younger. On the week the commission voted to ban the product, an 11-year-old girl in Tennessee was hit by a lawn dart and sent into a coma.

From Wikipedia. It's a bad example.


It once was the canonical X in "we banned X why haven't we banned Y" (where Y is something like "assault rifles" or "peanuts") but perhaps only for people of a certain age. A more recent example: the Boeing 737 MAX was grounded after two accidents out of a few hundred thousand flights.


We do not need to wait to know that high-speed collisions pose a threat to other people in addition to those in the vehicle that causes it.


It's a sad "research study" in ergonomics. You can't really do partial automation; it's a double penalty.


Beats me why Tesla's Autopilot hasn't been shut down already. If the car is on Autopilot, people will get used to it and use that time to check email, call a friend, or simply doze off. Some of these guys will end up getting killed, and 'we told you so' simply doesn't cut it.

EDIT: The accident rate is a statistic thrown out by Tesla and only serves the purpose it's meant to serve. Do we have any statistics comparing accidents when the car is on Autopilot vs. similar cars driven by similar drivers under similar conditions without Autopilot?


> Some of these guys will end up getting killed, and 'we told you so' simply doesn't cut it.

"I told you so" cuts it for tens of thousands of deaths caused by user error per year.

I'm not saying you're wrong about Tesla's autopilot, but it should be put in proper context. Tens of millions of lightly trained people are controlling multi-thousand-pound steel projectiles at speeds where stopping distances are measured in hundreds of feet. Gruesome results are par for the course. I know we're all hoping that these deep-pocketed companies with fancy tech and big research budgets are going to fix it all for us, but they're not going to solve it on their first go.


same reason street cars aren't hard limited to 70 mph with low acceleration


I have a fair number of "shall not be infringed" friends (pro 2nd amendment, perhaps to a fault. I make no judgement calls here). An argument I've heard from them goes along the lines of: "yes, we could perhaps reduce things like school shootings and accidental gun deaths with stricter anti-gun legislation. But that comes at the expense of our American freedoms, and a couple of lives simply isn't worth it."

Which sounded pretty harsh to me at first, but when you think about it, that's effectively the same as the HN narrative: "yes things like the PATRIOT act could reduce terrorism, but it comes at the expense of our American freedoms, and a couple of lives simply isn't worth it."

And now for these cars: "yes removing Autopilot could reduce car crashes, but it comes at the expense of our American (?) freedoms (maybe freedom to drive/technically innovate/something else entirely), and a couple of lives simply isn't worth it."

One additional trend I haven't personally witnessed, but I hear about, is that when in some freak accident "a couple of lives" includes the life of someone whom one of these gun guys, PATRIOT guys, or car guys knows, their opinion flips.


It's often presented as a loss of freedom when in reality it's a tradeoff in freedoms. One group might lose the freedom to carry a gun whilst another group gains the freedom to be safe at school.


That may be how one wants to present it, but it's disingenuous.

There is no freedom from danger or "freedom to be safe". Therefore one party has sacrificed something of value to placate a sentimental imagining that amounts to nothing to the other side. That isn't a trade. That's loss. No agency is gained by that arrangement. It's the equivalent of legislative/rhetorical marketing language. Sounds nice, but ultimately empty.

Safety is a function of an environment's inhabitants' ability to mitigate threats. If the environment is full of disarmed people, then the person who brings the gun anyway faces nil resistance.

No one is safe. No one has agency to counter the threat, and there is nothing to show for the sacrifice of the right except the ill will of those who recognize the physical, realistic (read: pragmatic) implication of the act, and disagree in the first place. Which, ironically, decreases safety even further.

Tech like Autopilot is actually kind of the reverse of the gun debate. It's an active development that increases complexity, creates new problems, and decreases agency, since everything then gets bound by "Autopilot safety is a must." The argument is made that automating the task with a machine must make the thing safer, because every act of automating things up to now through a machine has made things safer, which is just an inductive fallacy if I ever heard one.

Sometimes the things you don't do mean as much if not more than the things you do. A point oft overlooked, and lamented by the "Do as I say not as I do" parent.

Anyway. That was a fun tangent.


> There is no freedom from danger or "freedom to be safe". Therefore one party has sacrificed something of value to placate a sentimental imagining that amounts to nothing to the other side.

The freedom to walk down the street without risk of being shot is not sentimental. Presenting it as so is disingenuous.

> Safety is a function of the inhabitants of an environment's ability to mitigate threats.

The evidence shows that isn't working.


> > Safety is a function of the inhabitants of an environment's ability to mitigate threats.

> The evidence shows that isn't working.

No, the evidence actually shows that that does work - gun free zones are home to something like 99% of all mass shootings, and school shootings with a prepared police officer or teacher carrying a weapon tend not to make the news because they don't hit the level of 'mass shooting' in the first place.

And then there's the debate about whether forcing terrorists to choose a methodology other than guns is even a good idea in the first place - I'd rather have ten mass shootings killing ~4-6 people each than a single repeat of Oklahoma City.


With all due respect, I disagree. What the evidence lays bare is that there is a massive difference in what individual members/collective groups of society interpret as a threat, and thereby find worthy of attempts to mitigate. It also stands that until recently, violence had been on a downward trend.

On the collective front, for example, the anti-arms crowd sees a citizen exercising a right as a threat. The pro-arms side sees the anti-arms side's willingness to engage in rhetoric to dismantle a fundamental right as a threat to the overall underpinnings of civil society. The pro-arms side is not innately threatened by the bearing of arms, or refusal to do so, but by enforced asymmetric eligibility to bear arms without a darn good reason. This is the basic blueprint for any major X/!X or X/!(X&&Y) divide [I think I got the truth table right there].

On the individual front, things get even more diverse.

A gang member sees a rival gang member on his turf as a threat, but the "civie" right next to them sees neither as a threat until the violence starts, and they have no defense.

A harassed/abused/ostracized child generalizes the infliction of suffering by members of their peer group to the rest of humanity, making everyone a threat.

An oppressed/marginalized population feels threatened by their oppressors.

A previously oppressed/marginalized population feels threatened by their previous oppressors.

That's what I mean by there is no positive "agency" or "essence" to the "freedom to be safe". The state of safety is accidental to the circumstance. As long as there is an agent willing to break any civil consensus in such a way as to cause harm, the best move is for everyone to possess the most effective means of causing harm, but to refrain from employing it. This protects and perpetuates civil society, without creating "soft target" situations where no resistance can be offered to a violator of the peace.

I respect your right to the view that there is some positive existence to the freedom to be safe. However, at most I'll acknowledge the legitimacy of a claim to the right to live a life in the pursuit of Safety as semantically and grammatically capable of bearing significance and recognizable meaning; in parity with the recognition of the purpose of a government to secure for its People the right to Life, Liberty, and the pursuit of Happiness, as subscribed to in the United States.

Pursue, by all means, but finding/catching it is far from guaranteed, and depriving others of what brings them Safety will inevitably set the stage for violent conflict, even when done through civil means.

However, I digress, and fear this interesting diversion may have long overstayed its welcome, and it certainly was not my intent to turn this thread so far off topic. I just had a moment of clarity enabling me to formulate an articulation of an observation that has hitherto resisted my attempts to elucidate.


It seems like wherever we trade away freedoms the thing we're supposed to be getting in return never materializes.

Benjamin Franklin has a particularly relevant quote on this subject.


Do you really know what Benjamin Franklin said, and what the context was?

https://www.lawfareblog.com/what-ben-franklin-really-said#.U...


Well, when it comes to guns you haven't traded away any freedoms, quite the opposite so far.


>you haven't traded away any freedoms, quite the opposite so far.

We did in '34, '68 and '86. And then there's all the states that are still waiting for SCOTUS to tell them what "shall not be infringed" means.

In my state I have to pay hundreds of dollars and rely on the discretion of the local police in order to bear any semi-modern arms and that's just the tip of the iceberg. If that's not infringement I don't know what is.


Schools are already incredibly safe though.


In my opinion - the freedom codified in the 2nd amendment to let citizens overthrow an oppressive and unjust government.


You make a number of fair points, IMO, and people are often too quick to dismiss arguments like this.

Having said that, this case is a bit special, IMO. By your own logic, this comes down to a cold calculation of how many crashes (and deaths) are acceptable. To the regulating agencies, that number is likely measured relative to other vehicles regardless of AutoPilot. So if AutoPilot is not demonstrably less safe than the others, why regulate it out of existence?

So far, such evidence simply doesn't exist. And honestly, the Tesla effect magnifying every incident doesn't make it better. Tesla is far from the only vehicle with systems like this on the road.


Have you driven one? I find I am far more alert and have less cognitive fatigue when it is on Autopilot. People may misuse the technology, but they text and drive in cars without it (and have far more accidents). It is far, far better. Try it.


I haven't, actually. But I drive a car with a gearstick, and whenever I drive a car without one it feels like I can use that hand to hold some coffee or get some texting done.

I do get your point regarding cognitive fatigue, though I think we need a conclusive statistic rather than one thrown out by Tesla.


I cannot relate to this at all.


lol, should we ban cruise control too bud?


Because the accident rate is still lower than it is with humans driving.


Objectively not true. Autopilot is only used in a small subset of driving situations that are less risky than overall driving. And yet in the last 8 months alone, Autopilot has caused 3 easily avoidable deaths.

Autopilot is getting worse not better with time, and Tesla's tendency to introduce regressions (as with the Bay Area crash) means any temporary improvements in driving AI are just that--temporary.


> And yet in the last 8 months alone, Autopilot has caused 3 easily avoidable deaths.

This number by itself is absolutely meaningless without more context.

In 2017, 37,133 people in the United States died in an automotive crash. It is expected that some number of Tesla owners will die in a crash, regardless of currently engaged safety features.

Tesla has produced around 600,000 vehicles in total, including over 100,000 Model 3 vehicles.

I'm not making any claim or dispute regarding the effectiveness of Autopilot, whether or not it's improving. I'm just saying, making that statement you made is meaningless at least and misleading at most.


> I'm just saying, making that statement you made is meaningless at least and misleading at most.

I'm just pulling a page from Elon Musk and Tesla's marketing department.


You made several bold claims. Do you have any proof of any of them? The statistic you provided is useless without context. In the last 8 months there were ~24,000 automobile fatalities in the US. What's important is how Tesla Autopilot compares to that.

Also, do you have any proof that Autopilot is getting worse over time that doesn't rely on anecdotes?


I don't need to prove anything, since I'm not claiming that Teslas are safer than all other cars in all other conditions like Tesla and Elon Musk are. They need to prove their claims, and resorting to spurious comparisons to "miles driven" under conditions in which Autopilot was "working but not actually controlling the car" doesn't count.


>Autopilot is only used in a small subset of driving situations...

By a subset of the driving population who are less risky than the overall driving population.


Autopilot cannot cause deaths as autopilot cannot drive. 3 drivers who chose not to pay attention caused those deaths.

EDIT: (I'm assuming for the purposes of this statement that you are correct and those were easily avoidable by an attentive driver)


A passenger cannot cause deaths as a passenger cannot drive.

The Tesla autopilot controls the throttle, brake, and steering. That meets my definition of "driving."


I'm sorry, but everything that you said in relation to Tesla there is wrong. My Nissan with ProPilot can also control the throttle, brake, and steering, but that doesn't make me a mere passenger any more than cruise control does.

It routinely false positives on frontal collisions and occasionally does things that would be dangerous if I weren't driving.

The driver in a Tesla is not a mere passenger, regardless of whether or not they have enabled autopilot. The initial warnings on enabling autopilot as well as the attention nags on the steering wheel all make this perfectly clear to the driver.


Weird, given Tesla's claim that with Autopilot "the person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself." [1]

[1] https://www.latimes.com/business/la-fi-tesla-autopilot-claim...


The words that you quote are from a demonstration video from 3 years ago. The features demonstrated were not available to the public and still aren't.

Tesla has never claimed that a current vehicle can drive itself without the driver being responsible for the vehicle at all times.


The video was posted directly to the Autopilot page on Tesla's website. The video is also only 2 years old, not 3.

> Tesla's has never claimed that a current vehicle can drive itself without the driver being responsible for the vehicle at all times.

Yes, they did. Hence why the NHTSA and other groups went after Tesla for their claims about Autopilot.


The problem is how the deaths caused are perceived. An imperfect (aka "randomly kills you") AI is far scarier than a statistic that people interpret as "x% of bad drivers". Most people think they are better than the average driver. Not to mention that a lot of deaths are due to other, bad drivers.

It's why people freak out about terrorism more than they do about car crashes or drug addiction.


Even bad drivers are very, very unlikely to get in any crashes so long as they're not intoxicated or distracted. Most people will probably never get in a crash (that is their fault) in their life.


> when used properly by an attentive driver who is prepared to take control at all times, drivers supported by Autopilot are safer than those operating without assistance. For the past three quarters we have released quarterly safety data directly from our vehicles which demonstrates that.”

Tesla attached a lot more caveats than you did in order to assert superior safety.


There is no reliable citation that you can give to prove that.

Emphasis on reliable, as Tesla's own stats are highly misleading -- they basically compare highway data, because Autopilot is mostly used on highways, to all other conditions in which accidents happen.

I'm sure that if Autopilot was active all the time, the rate of accidents would be far higher, and Tesla's numbers wouldn't look so positive.
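
To make that denominator mismatch concrete, here's a tiny sketch with purely hypothetical numbers (nothing below is a real Tesla or NHTSA figure):

    # Hypothetical illustration of the denominator problem; every number here is made up
    # and is not a Tesla or NHTSA figure.
    def crashes_per_million_miles(crashes, miles):
        return crashes / miles * 1_000_000

    autopilot     = crashes_per_million_miles(5, 10_000_000)   # assumed: Autopilot, highway-heavy miles
    human_highway = crashes_per_million_miles(4, 10_000_000)   # assumed: humans, highway-only miles
    human_all     = crashes_per_million_miles(10, 10_000_000)  # assumed: humans, all conditions

    print(autopilot, human_highway, human_all)  # 0.5 0.4 1.0
    # Against the all-conditions baseline, Autopilot looks twice as safe (0.5 vs 1.0),
    # even though it is worse than the like-for-like highway baseline (0.5 vs 0.4).

That's why "Autopilot miles vs. all miles" tells you very little on its own.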


_Citation Needed_ - I agree we'll get there eventually, but is it currently lower?


I think it is likely that certain aspects of the system, such as emergency braking, improve safety, but it may be that the more advanced features (those that actually allow the driver to get away with not paying attention, most of the time) do not. If this is the case -- and I am not saying it is, we need more data -- then the effectiveness of the former should not be used to cover up shortcomings in the latter.

If, as Tesla insists, these autopilot crashes are all due to the driver not paying attention, then it seems very plausible that a better attention-detecting system (one that tracked the driver's gaze, for example) would improve safety. Of course, if the system required drivers to do what Tesla says they should be doing, it would not be so appealing.

And if, as Musk is saying, drivers are confused or mistaken about the capabilities of the vehicle, then the first response should be to stop making ambivalent statements about them.


but that doesn't support the scary narrative!


I don't think engineers should be using logic like this.

Say you stumble on a lost civilization in the woods, and they have yet to discover any form of shelter. Any time it rains or snows they all sit on the ground and half of them die of pneumonia. What do we, as engineers, do to help them? Tie some sticks together and make a lean-to? That would certainly reduce the death rate. But I don't think it would be the ethical thing to do. The ethical thing would be to help them build proper buildings out of the best available materials, install an HVAC system, fire alarms, etc.

In other words, engineering should seek to maximize benefits for users, and there are cases where choosing not to maximize benefits could be seen as unethical, even if you do provide some benefit.

In the case of Tesla, the ethical thing to do would be to include a lidar which can prevent the car from smashing into large objects at full speed.

Just because the accident rate is lower does not mean this death or all the others are acceptable. Tesla had a choice of whether to use best engineering practices, and they chose not to.

The reason they are doing this is money. They couldn't sell cars with an expensive lidar on it. And they shot themselves in the foot by charging people thousands of dollars for future "full self driving" capability. Now if they don't continue forward with this plan, with no lidar, it will have serious financial consequences for them.

Note that other self-driving companies have found a way to make lidar financially viable for them. It's not impossible.

Tesla is killing people for money.


That all seems like a big stretch based upon some big assumptions to me. It is far from conclusive that LIDAR alone would make the difference in cases like this, due to the way sensors have to be fused together to handle these kinds of scenarios.

There are fair criticisms around Tesla's marketing and implementation, but I don't think "they should already have lidar on there" is really one of them.


Giving a nomadic tribe a building with HVAC and fire alarms would be unethical. If you then require them to live in those buildings you would meet the standard of cultural assimilation and destruction, as Canada did to the Inuit. As an engineer I find this reprehensible and arrogant.


So you want to stifle technological progress because some people can't be bothered to be responsible? Darwinism at work, honestly.


Here's a recap of the pertinent NTSB conclusions from the last time this happened. How many will apply again this time?

“There was sufficient sight distance to afford time for either the truck driver or the car driver to have acted to prevent the crash.”

“The Tesla’s automated vehicle control system was not designed to, and did not, identify the truck crossing the car’s path or recognize the impending crash; consequently, the Autopilot system did not reduce the car’s velocity, the forward collision warning system did not provide an alert, and the automatic emergency braking did not activate.”

“If automated vehicle control systems do not automatically restrict their own operation to those conditions for which they were designed and are appropriate, the risk of driver misuse remains.”

“The Tesla driver was not attentive to the driving task, but investigators could not determine from the available evidence the reason for his inattention.”

https://www.ntsb.gov/investigations/AccidentReports/Reports/...


Doesn't appear to differ from previous Tesla autopilot accidents where the vehicle couldn't detect a tractor trailer crossing the path of travel with cameras or front facing radar. Interesting that Autopilot had only been activated for 10 seconds prior to the incident occurring.

I am not a domain expert. Front-facing radar and cameras alone are clearly inadequate for forward object detection [1] (unless we're going to mandate radar-detectable skirts and images superimposed on the sides of tractor trailers for vehicle detection). Would solid-state LIDAR (essentially laser ranging, I suppose, as you're not building a 360-degree point cloud) solve this particular edge case? Some automakers are including laser headlights in their vehicles [2] (although I don't believe this is yet approved for the US market); would it not make sense to have the headlights perform front laser ranging along with their illumination function?

[1] https://www.caranddriver.com/features/a24511826/safety-featu...

[2] https://www.osram.com/am/specials/trends-in-automotive-light...


I also found the 10-second engagement time odd. On the one hand, you might not think it's enough time to become seriously distracted (e.g., you wouldn't be "comfortable" with the semi-autonomous operation yet).

On the other hand, the engagement of autopilot might have been caused by a desire to explicitly NOT pay attention, however briefly. think: "ok, now let me read this text from the person I'm on my way to meet."

I have found myself occasionally using autopilot to augment my driving when I'm doing something that inherently decreases focus on the road, however briefly; eg, taking my sunglasses out of the glovebox. I still drive the same way I normally would in any other car when reaching for something in the glovebox, so it's easy to think "any enhanced safety is better than none".

On the other hand, it could lead to a false sense of security where you pay 10% less attention than doing the same task in a non-augmented car. Or you get comfortable doing such things, so you do them more frequently than truly necessary. (for example, you might just shield your eyes from the sun with your hand rather than grabbing sunglasses in a non-augmented car).


Tesla already has RADAR. Why would they need LIDAR?

The issue isn't with the sensors, but rather the system reacting to sensor data.


Radar is not sufficient for mitigating this scenario. Too low resolution.


You can buy much, much nicer radar than Tesla specs for their vehicles.

The stuff the DoD buys (or was buying a decade ago) is good enough to tell two basically identical tractor trailers apart based on minor features, dents/dings, etc.


Then why do automakers who support AEB not spec this radar? The Car and Driver link in my GP post above this specifically demonstrates that Tesla is not immune to failures to detect obstacles in the vehicle's path; it happens across many automakers (Subaru, Toyota, etc).


Probably because it hasn't come down in cost yet and/or there's physical limitations involving packaging the receiver.

As the long slow march of progress continues it may come down in cost enough to be viable on vehicles.


Can you share some more links to the radar technology you mention with high resolution and discrimination functionality? I am interested in performing more research on this topic. I'm familiar with several phased array radar platforms, so I might be aware of this specific implementation, but perhaps not!


https://www.baesystems.com/en-us/product/vehicle-and-dismoun...

As I understand it it's the algorithms that analyze the radar data that are the secret (literally) sauce. I didn't work on any of that. I just plumbed various data from A to B. I don't know much other than it was really damned good for being "legacy" at the time I was working there.


Were the tractors moving at different speeds in this test?


The test data was recorded during an average day over a medium-sized city in the DC area.

So yes.


I don't think we can conclude that cameras are inadequate for forward object detection, only that the processing systems used so far were. This can be a matter of the training of the neural networks and the amount of processing power in the computing units. Radar is limited as long as trailers in the US are allowed to have such high gaps to the ground. A lower bar on the sides would both help the collision detection systems and reduce the severity of the crash itself.


>A lower bar on the sides would both help the collision detection systems as well as reduce the crash itself.

The neural network should be capable of recognizing a truck from any angle, even if parts are obstructed. I hope the details are made public so we can see whether it actually tagged the obstacle but ignored it, or detected it as something else, like maybe a roadside advertising panel.


> "Radar is limited, as long as trailers in the US are allowed to have such high gaps to the ground."

One of these crashes involved a firetruck, which unlike a semi-trailer, does not have a large gap underneath it.

Radars are limited, period. The radars used by Tesla lack the angular resolution to distinguish a stationary object next to the road (e.g. a building or parked car) from a stationary object sitting right in the middle of the road.


> One of these crashes involved a firetruck, which unlike a semi-trailer, does not have a large gap underneath it.

That fire truck wasn't crossing the road, it was at the roadside, which creates a different situation. The problem with high trailers is that they look very much like a bridge to a radar. In any case, a proper side barrier can prevent deadly accidents.


If the radar cannot distinguish between a bridge with five meters of clearance and a "bridge" (trailer) with a single meter of clearance, then the radar simply lacks the angular resolution to be effective. It's exactly the same issue as being unable to distinguish objects on the side of the road from objects in the middle of the road.

It's insufficient angular resolution for the application, no matter which way you slice it.
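
As a rough back-of-the-envelope sketch of why elevation resolution matters here (the mounting height, clearances, and ranges below are assumptions for illustration, not real specs):

    import math

    # Back-of-the-envelope geometry; mounting height, clearances, and ranges are
    # assumptions for illustration, not any manufacturer's specs.
    sensor_height = 0.5      # m, assumed radar mounting height
    bridge_clearance = 5.0   # m, assumed overhead clearance of a bridge
    trailer_underside = 1.2  # m, assumed height of a trailer's underside

    def elevation_deg(target_height_m, range_m):
        # Elevation angle from the sensor to the target's lower edge.
        return math.degrees(math.atan2(target_height_m - sensor_height, range_m))

    for range_m in (150, 100, 60):
        delta = elevation_deg(bridge_clearance, range_m) - elevation_deg(trailer_underside, range_m)
        print(f"at {range_m:>3} m: ~{delta:.1f} degrees of elevation resolution needed")

    # At highway speeds the brake/no-brake decision has to be made well beyond 60 m,
    # where the bridge and the trailer differ by only a degree or two of elevation.

At long range the two cases differ by only a degree or two, which is the resolution problem being described.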


It doesn't look that way; it seems to have been in a lane. But is there any report on whether the autopilot was in fact engaged?

https://twitter.com/CC_Firefighters/status/95552999131956019...


The radar Tesla spec'd is quite limited.

The radar that companies people on HN don't think highly of sell to the government is "scary good", and that's just the decade-plus-old tech that I'm familiar with.

I'm sure there's reasons Tesla doesn't/can't put that in cars though.


It's nice and all that the military has such cool radars, but how much do those systems cost? How large are they? What sort of power requirements do they have? What sort of duty cycles do they have? And can they legally be exported?

Elon caving and adopting Lidar is probably more likely.


But Elon told me I will be able to rent my Tesla out as an autonomous cab by the end of this year or so.


They could probably solve the problem by mounting the radar on a trunnion so it can sweep up and down (over, say, ~30 degrees).


It seems like this is a case where the autopilot thought that the truck was a stationary object. Based on my reading of the press release, it seems the truck was turning across the car's path, which would look like a stationary "bridge" to the model and subsequently be ignored.

This seems to be a hard problem where you don't want the model to false positive on a stationary object on the side of the road and slam on the brakes.
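
As a toy illustration of that trade-off, here's a minimal sketch of the kind of stationary-target filtering commonly described for radar-based driver assistance; it is not Tesla's actual algorithm, and all speeds and thresholds are assumed:

    from dataclasses import dataclass

    # Toy model of a common radar heuristic: deprioritize targets whose ground speed
    # looks like ~0 (bridges, signs, parked cars) to avoid constant false braking.
    # This is NOT Tesla's algorithm; speeds and thresholds below are assumptions.

    EGO_SPEED = 30.0  # m/s, assumed own-vehicle speed (~67 mph)

    @dataclass
    class RadarTarget:
        name: str
        closing_speed: float  # m/s, Doppler-measured speed toward our car

    def ground_radial_speed(t: RadarTarget) -> float:
        # Target's radial speed in the ground frame (0 means "looks stationary").
        return t.closing_speed - EGO_SPEED

    def brake_worthy(t: RadarTarget, threshold: float = 1.0) -> bool:
        # Naive filter: only react to targets that are clearly moving in the ground frame.
        return abs(ground_radial_speed(t)) > threshold

    targets = [
        RadarTarget("overpass", closing_speed=30.0),          # truly stationary
        RadarTarget("slower lead car", closing_speed=5.0),    # moving with traffic
        RadarTarget("crossing trailer", closing_speed=30.2),  # moving, but almost all sideways
    ]

    for t in targets:
        print(f"{t.name:16s} brake-worthy: {brake_worthy(t)}")

    # The crossing trailer gets filtered out along with the overpass, because its motion
    # is nearly all tangential and contributes almost no Doppler.

A trailer crossing perpendicular to the car's path has almost no radial velocity, so a naive stationary-clutter filter drops it right along with the overpasses it was meant to ignore.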


> Based on my reading of the press release, it seemed like the truck was turning across the car's path, which should look like a stationary "bridge" to the model and subsequently ignored.

So if I understand you properly, an adversarial attack against Tesla autopilots would be to suspend a ladder across the road at windshield height?

"Vulnerable to being clotheslined" seems like a bit of an oversight.


How many human drivers would be able to avoid that same thing though? Most of the reason we can safely move cars is because most people follow the rules and aren't murderers. There are plenty of "attacks" you can do against human drivers too.


I agree with your general message, but basically no human drivers are going to plow into the side of a semi truck or a partial lane obstruction with no attempt to avoid it.


Really? So no one ever is going to take a quick glance at their phone, not see the truck start to pull out, and do the exact same thing this Model 3 did? No one? I can say with 100% certainty that this exact same thing has happened at least once in history to a normal driver who simply got distracted.


Who cares about adversarial attacks on vehicles? There are about a million different ways to attack any vehicle if that is your objective. You could also use a spike belt, a firearm, a projectile, or just ram it with a vehicle that has more mass.


I am beginning to get frustrated with news surrounding both Tesla autopilot crashes, and self driving car crashes. I understand that we are seeing lots of news about it because it is new, and people are scared that self driving cars are going to kill people. But can you imagine if an article trended every time someone got into an accident while using cruise control?


This is about surrendering responsibility to a piece of software and trusting it 100% with your life and/or your family members' lives, versus taking responsibility and engaging in driving the car. The news, and legitimately so, is covering the unusual: surrendering your abilities, your judgment, and consequently and directly your life, to faulty software and hardware.


I have said this to my friend: so far all Autopilot accidents have resulted in the Tesla driver's death, but there is a person out there, hopefully not a child, whose car a Tesla will ram while on Autopilot, and their death will put an end to this stupid advertising/naming-with-fine-print excuse.



