It seems to me that self-driving cars can't be just as safe as the status quo, but have to be far far better. It's an unreasonable need, but human nature.
I'll bet that to really make it work well, you also need to redesign roads to suit the new cars.
In any case, I'll believe it when I see widespread/universal adoption of self-driving in constrained environments (mining vehicles, warehouses, storage yards). Step two will be things like garbage trucks (slow, expensive, otherwise automated, phone home when it gets in trouble).
Humans have a tolerance for human-style accidents, but not robotic-style accidents. A dipshit who pulls out in front of you and t-bones you is a human-style accident that is acceptable. A Tesla that sees a river and thinks it's a road is a robotic-style accident. It's going to take centuries for us to become acclimated to how machines mess things up.
One can learn to drive defensively and spot said dipshit in advance, decreasing the likelihood of a collision.
Yes, not in all cases, but there's a bell curve. As humans, we can read other humans.
Robots aren't inherently bad either. I love my Roomba precisely because if it fails, it fails predictably. I can and do make my apartment Roomba-friendly.
But self-driving, deep-learning-driven cars don't have this feature. They fail unpredictably, as in your example. We don't have an understanding of what really drives the decision-making there. And we keep discovering failure modes as they manifest.
One day, the car might think the river is a road. Another, it will not recognize a road block as such. But only if it's a Tuesday. Or something. We really don't know.
So I don't think we'll get acclimated to that. Unpredictable failure is a reasonable thing to be afraid of.
I'd argue it is unreasonable given that (afaik) no one is marketing a self-driving car that doesn't also require a driver be attentive at all times.
You can learn to drive a self-driving car defensively, in the same way you learn to drive a normal car defensively. Pay attention to your surroundings, blind spots, and other cars.
And Uber's and Tesla's accidents have shown us that you can't rely on the human in the car to catch it before it does something stupid. After all, teaching someone how to drive and watching them attentively is difficult work.
It's especially bad because we humans supervising the cars are going to be worried about preventing human-style accidents. Maybe you're worried about it following too close or not noticing the red light - so you don't notice the warning signs in time when it veers into the concrete barrier.
Humans fail in unpredictable ways all the time. Look at the cases of people using vehicles as a weapon of terrorism. How are we supposed to predict that?
Yeah, we can go back and investigate the person's life and say "see, they had a history of browsing extremist websites, making bigoted comments, suicidal thoughts, etc." The problem is that this description fits a very large group of people, compared to the subgroup that actually carries out attacks.
So while it is true that we may have difficulty (or it may even be impossible) auditing a self driving car after the fact, it should not be any more comforting to know that human failure can be explained after the fact, given that the human failure was not predicted beforehand.
You can get 99% of the way there with extremely detailed maps. Where the road is, where the lanes are, what the speed limit is, where to find stoplights and rail signals, where to be extra-cautious, how to navigate through a construction zone, all can be pre-programmed.
A Waymo is basically just following a virtual track on an extremely detailed LIDAR map of the area, obsessively watching for pedestrians and following the rules of the road the best it can. It will never think a river is a road.
Not to say that there aren't a million little things to be concerned about with this approach, or that there aren't major things to overcome (like heavy snow on the ground making your LIDAR map useless). But I think we'll get there, and in many places soon. There's nothing that says this tech has to exist everywhere out the gate.
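For a sense of what "following a virtual track" means in practice, here's a toy pure-pursuit-style waypoint follower. To be clear, this is a hedged illustration only: the function name, the lookahead/wheelbase numbers, and the whole simplification are mine, not anything from Waymo's actual stack.

```python
import math

def steering_angle(pose, waypoints, lookahead=5.0, wheelbase=2.7):
    """Pure-pursuit toy: steer toward the first pre-mapped waypoint
    at least `lookahead` metres away from the car's current pose."""
    x, y, heading = pose
    # Pick the first waypoint beyond the lookahead distance.
    target = next((w for w in waypoints
                   if math.hypot(w[0] - x, w[1] - y) >= lookahead),
                  waypoints[-1])
    # Angle to the target, expressed in the car's own frame.
    alpha = math.atan2(target[1] - y, target[0] - x) - heading
    # Classic pure-pursuit steering law.
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# Straight pre-surveyed path ahead: steering should be zero.
path = [(float(i), 0.0) for i in range(20)]
print(steering_angle((0.0, 0.0, 0.0), path))  # → 0.0
```

The point of the sketch is that steering falls out of geometry against a pre-surveyed path; perception's job shrinks to "watch for obstacles and verify the map still matches reality," which is a much narrower problem than inferring the road from pixels.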
Tesla's approach thinks white trailers are open sky. The river comparison is not unreasonable at all.
Almost all of the startups in the space are following the Tesla approach of just throwing machine learning at the problem, which means almost all work in the space suffers from novel input producing unpredictable results. This is one of the things that has killed trained and evolved systems in the past. The fact that few of the companies in this space are even trying to manage it (either by building interpretable models, or by using structured models with ML only for parameter fitting) is a good indicator that the whole business is either a fraud or built on the premise that, with enough data (or, for the incautious and unaware, enough simulation), the problem solves itself.
I think what we are actually seeing is that Waymo (where, in ten years, they might have a solution) and Tesla (which is mostly worst-in-class but as a company is the master of hype) drove hype around self-driving. Then Uber and Lyft latched on (because they needed a story to paper over their terrible economics) and pushed it even higher. The Otto thing and Cruise's acquisition made VCs pay attention.
So these companies are acquisition bait for the assumed-clueless big auto companies. They will not deliver self-driving cars. At best they will deliver next-generation enhancements to emergency braking, etc.
They can change the construction zones on any given day - adjust the lane changes, move the barriers a foot over, or add a new fence where there wasn't one yesterday.
You would think, then, that a cooperative system between human and robot would be the best option. The human prevents the robot from driving into a river, and the robot prevents the human from t-boning someone else. Except most people on HN seem to be completely against this type of cooperative system and would only support self-driving tech when it gets to level 4 or even level 5 autonomy.
I'm not sure how such a system could possibly work; human attention doesn't work that way. If the robot drives for 2 hours and 35 minutes just fine and one minute later decides to drive off the road, how is a human going to react to that? It's not really possible.
If a human has to pay attention all the time, then they might as well just be driving.
And one of the advantages of self driving cars will be freedom for the elderly and disabled who already have difficulty driving cars. This is the group, in my opinion, that will benefit the most from self driving cars.
>I'm not sure how such a system could possibly work; human attention doesn't work that way. If the robot drives for 2 hours and 35 minutes just fine and one minute later decides to drive off the road, how is a human going to react to that? It's not really possible.
Right now the self driving tech doesn't do anything to communicate with the driver besides just a general warning. However that doesn't have to be the case. Maybe some type of confidence indicator should be added. A simple green, yellow, red warning system could help a driver know when a self driving car might not be as certain in its surroundings.
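A confidence indicator like that is trivially simple to express. The sketch below is hypothetical: the thresholds and the function name are made up for illustration, not taken from any shipping system.

```python
def confidence_indicator(confidence, yellow_at=0.85, red_at=0.6):
    """Map a model confidence score in [0, 1] to a driver-facing
    colour. Thresholds are invented placeholders."""
    if confidence >= yellow_at:
        return "green"
    if confidence >= red_at:
        return "yellow"
    return "red"

print(confidence_indicator(0.95))  # → green
print(confidence_indicator(0.70))  # → yellow
print(confidence_indicator(0.40))  # → red
```

The hard part is not the mapping but producing a confidence number that actually correlates with danger, which is an open research problem.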
>If a human has to pay attention all the time, then they might as well just be driving.
I just don't get this mindset. There are different levels of "paying attention" that require different levels of mental energy. The same argument also isn't made about nearly any other driver-assist technology. No one suggests that automatic transmissions or cruise control are worthless because the driver still has to pay attention.
>And one of the advantages of self driving cars will be freedom for the elderly and disabled who already have difficulty driving cars. This is the group, in my opinion, that will benefit the most from self driving cars.
I agree with that, but it doesn't mean that other drivers won't benefit as well.
> Right now the self driving tech doesn't do anything to communicate with the driver besides just a general warning.
I think this probably fails to account for how quickly it's making decisions, how quickly the situation can change, or how it could even judge its own confidence. It could quite confidently drive you into a river; it's not likely to do that unless it's pretty sure it's a road.
As self-driving technology gets better, the robot is going to be much better and the failure situations much more rare.
> I just don't get this mindset. There are different levels of "paying attention" that require different levels of mental energy.
If one doesn't have to provide direction or speed, there is literally nothing to keep a human attentive enough to make a difference. You expect someone to passively look at a green/red/yellow indicator for hours at a time while not otherwise engaged with driving? And then when that indicator turns red you expect them to immediately be able to make a useful judgement? I think humans are more than likely to make that situation worse!
> The same argument also isn't made about nearly any other driver-assist technology. No one suggests that automatic transmissions or cruise control are worthless because the driver still has to pay attention.
There is a huge difference. With cruise control or an automatic transmission, the vehicle still doesn't go where it's supposed to unless you pay attention. The driver has to pay attention because if they don't, they don't get anywhere. With a self-driving car, the driver is effectively a passenger and doesn't have to pay attention for the vehicle to work.
I haven't seen any evidence of a self-driving car accident in which the car had no potential to warn the driver. All the examples I have seen are cases where the car misidentifies an object in its path as a non-threat. The problem is that braking and/or swerving are very drastic actions, so there is a deservedly high barrier before a system is willing to take them. At worst you could run two systems in parallel, with one tuned to be much more aggressive in making those calls, and communicate that information to the driver.
No self driving car that doesn't have at least level 4 autonomy allows the driver to not pay attention like you are suggesting. The benefit of a cooperative tech is that it takes mental load off the driver. Like I said previously, there are different levels of mental energy that a person can spend on a task. It isn't like the choices are either 100% or 0% of your attention.
Such systems work fine. The human does have to be constantly driving, but the computer intervenes when it detects the human doing something dangerous. Look at automated GCAS in aircraft.
Full self driving in the general case will probably not exist in our lifetimes. It's like chasing an imaginary pot of gold at the end of a rainbow. So the industry should aim for goals that are actually technically feasible and will deliver tangible safety improvements.
I disagree. The progress of camera and laser technology, in addition to AI, is improving at an incredible rate. The market for self-driving vehicles is huge -- any company that figures it out will make billions (maybe trillions).
Computers can have a perfect 360 degree view and track and categorize every object around them. Humans will be both physically and intellectually outmatched eventually. And humans just aren't that good at driving on average.
You're welcome to disagree, but so far there's no hard evidence to support your viewpoint. Just a lot of hype.
Past rates of progress are not indicative of the future. The easy problems have already been solved, and progress is already slowing down.
In particular the notion that computer vision can reliably track and categorize every object is just laughable. The state of computer vision research is nowhere close to that capability. Errors are frequent, especially under adverse conditions.
In November 2017, Waymo announced that it had begun testing driverless cars without a safety driver in the driver position. In October 2018, Waymo announced that its test vehicles had traveled in automated mode for over 10,000,000 miles (16,000,000 km), increasing by about 1,000,000 miles (1,600,000 kilometres) per month.
In Arizona, Waymo has fully autonomous taxi service you can use right now. https://waymo.com/apply/
> Humans have a tolerance for human-style accidents, but not robotic-style accidents.
I think it's more because there's someone to yell at - someone to blame.
Think of it like a taxi driver with a passenger; people ride in taxis all the time - or Uber, or Lyft - and have no problem doing so, even though they aren't in control and have no idea about the driver's driving ability. They assume the driver is average or better, when that may not be the case at all. So what happens when an accident occurs?
Well - if the passenger is still alive, and believes it to be their driver's fault, they can yell at their driver. If they believe it's the other driver's fault, they can yell at them...and so forth.
But you put a robot in place of any driver - then who can you yell at when an accident occurs? Nobody. You can yell all you want, you can even blame the corporation that made the car/robot - but you are still yelling at a machine. It won't change anything. It won't feel bad. It probably won't even make you feel better, because it won't restore your agency in the situation.
That's my guess anyhow - pulled straight from my nether regions, with nothing to back it up. I wonder if there have been any studies on this? Are there any data on what happens in other "self-driving" incidents; like perhaps completely automated trains with no humans other than passengers on-board? Do such things even exist?
Or do companies already know this - and put a human "in charge" to take the blame and heat for when an accident happens (maybe they also provide an e-stop button for the human to press - whether it's actually connected to anything or not - but it's logged as to whether it was pressed, just to help "assign blame" later)...?
While I have no proof or anything to back this assertion or theory up, it seems like it would match human nature. When things go wrong, humans want to blame somebody - some actual, sentient being. I'm sure there's a psychological term for this need and the action; I don't know what that term is, but I'm almost certain it's been studied.
Essentially, one of two things would have to happen to overcome this: either the machine becomes perfect at driving and avoiding accidents (even to the point of avoiding things of natural random consequence, such as rocks falling onto the road), or it becomes more sentient, and ideally emotive - that is, a GAI with human traits.
The former is likely impossible - randomness will always get ya; even trains have what should be "avoidable" accidents.
The latter - well, if it's achievable, that is, if we can make such a GAI - it will likely also suffer the same problems humans do, most notably that we are fallible and we make mistakes.
...but at least then we can blame them on something, right?
We all accept risk in our lives, but in most cases the qualitative characteristics of the risk are the more important part. I don't really have a good answer to your point, but I find it morally unsettling that many people would agree with this.
One example I can think of is Russian roulette. It would be quite a different game if each player had to shoot their neighbour instead of themselves. Literally the same expected outcome, but an entirely different game.
Each human I encounter every day has a small chance of murdering me (let's say one in a million). Would I want a robot wandering around with a one-in-a-billion chance of murdering others?
We accept that we cannot endlessly manipulate and control other humans, that we cannot forcibly "fix" arseholes, we have no such limitations for robots.
We already have an example of strong automation-driven transports: aeroplanes. Honestly I see no reason to have lower standards for self-driving cars.
>When things go wrong, humans want to blame somebody - some actual, sentient being.
Exactly! When (not if) a fully autonomous self-driving car causes a terrible accident resulting in death, how are the loved ones going to feel when they've only got a company to blame? In such an emotional situation, the response needs to be better than Uber or whoever saying "we're very sorry our car did that, but hey, we're trying our best, and these things are going to happen". Are people really going to accept that? That's simply not going to fly.
>It seems to me that self-driving cars can't be just as safe as the status quo, but have to be far far better. It's an unreasonable need, but human nature.
What would be the point of self-driving cars which were no safer than the status quo? That's just removing freedom from drivers and adding more complexity to infrastructure for no added benefit.
It's also exactly the narrative that proponents of self-driving cars have been driving (pun intended), self-driving cars would eliminate all, if not nearly all, accidents and fatalities.
What they're talking about, I think, is the usual claim that self-driving cars only need to be slightly better than the status quo (in deaths per mile driven) to make sense, because then anyone moving from driving themselves to using a self-driving car decreases the number of deaths. From a raw-statistics POV this makes sense: if a self-driving car is even just a little better than the average driver, then on average anyone switching to an autocar decreases the number of injuries.
What they're saying (and I agree) is that human nature means people won't be willing to get into a car that's 'just a little better than average.' There are a couple of reasons I think that will be true: 1) people generally think they're better at driving than they are; 2) moving from people to autopilot shifts responsibility from generic 'bad drivers' causing accidents onto a single system. Going from diffuse responsibility to concentrated liability on the part of the manufacturer will probably mean manufacturers get blamed much more for the failures of their cars.
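The raw-statistics argument really is just arithmetic on fatality rates. A back-of-the-envelope sketch with illustrative numbers (the US rate is on the order of ~1.1 deaths per 100 million vehicle-miles; the "10% better" figure and the miles shifted are purely hypothetical):

```python
# Illustrative numbers only, not measured autocar performance.
human_rate = 1.10e-8      # fatalities per mile, roughly US order of magnitude
auto_rate = 0.99e-8       # "just a little bit better than average" (hypothetical)
miles_shifted = 1e12      # miles moved from human to automated driving

lives_saved = (human_rate - auto_rate) * miles_shifted
print(round(lives_saved))  # → 1100
```

So even a marginal per-mile improvement compounds over enough miles, which is the statistical case; the whole disagreement in this thread is whether anyone will accept "marginal" emotionally.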
I think insurance companies can affect this behavior. If self driving cars are safer, then the insurance companies could lower rates for those cars, creating some economic incentive to override the irrational human brain.
The main attraction is that automation will make car ownership stop making sense. Even right now, taking a cab every single time is economically break-even or better for many urban dwellers. If you take out the driver, cabs can be much cheaper.
Second, what you call "freedom", is called a "chore" by a good chunk of the population. I can't wait to not touch another steering wheel ever.
To each his own, but I don't want to have to get permission from Google, Tesla or anyone else before driving somewhere, then be escorted under surveillance like a prisoner in the back of a cop car with every possible metric being mined and correlated for the benefit of corporations, insurance companies and the state, leaving my safety to the whim of algorithms and systems likely built to the cheapest standard possible.
I'll gladly deal with the "chore" of being able to turn my own key and drive my own car as an alternative.
Yeah, as someone from a rural background, it always strikes me as far-fetched that people are going to give up ownership of vehicles because of driverless tech. People routinely spend 10x what they need to on a car, living well outside their means for essentially status (buying a new BMW instead of a used Honda for example). Moreover, there is more to owning your own car than privacy. It’s really damned convenient to have your car loaded up with your stuff. Your snow gear, your bike gear, your tools, your emergency electronic tools and first aid. I like to keep a folding chair in my car, it’s surprisingly convenient.
I don’t have a ton of experience living in cold climates, but I did a spur-of-the-moment cross-country road trip in February a few years ago. We left from the Bay Area headed to Indiana, and when we stopped outside of Chicago for gas, I realized how naive our plans were. I had to get back in the car while the gas was pumping, it was so cold. If we had car problems or an accident, dying of exposure could have been a real thing. We didn’t have cold-weather gear with us, I was wearing jeans and a hoodie, and I really felt like an idiot for it. Such fears can be easily solved by having blankets and food and chemical heat sources in the car, which I’m pretty sure is normal for people in those climates. But you can’t really have gear like that with you if the driverless car goes and gets another fare once you reach your destination.
I wonder at what point they'll just solve the safety issue by removing the windshield and adding shitloads of impact absorption and a few steel plates - things that aren't possible on vehicles where the driver needs to be able to see outside.
The limiting factor right now is mostly that nobody wants to wear 4/5/6pt harnesses or have a properly fitting bucket seat (these things are a big pain in the butt for a daily driver) so we have to keep shoving explosively deployed cushions into places so humans don't bounce off of harder things.
The inability to replace the windshield with something stronger is unlikely to be the practical limitation for the foreseeable future.
> What would be the point of self-driving cars which were no safer than the status quo? That's just removing freedom from drivers and adding more complexity to infrastructure for no added benefit.
What?? Safety is a great side dish, not the main course. The point is to free up hundreds of millions of man-hours spent daily focusing on roads.
Why are you assuming no one would own self-driving cars? It's likely they'll simply be sold the way any other kind of car is nowadays, and be far more expensive than typical cars.
Also, commuters will still have the same commutes whether they or their car is doing the driving. The deployment of self-driving cars is not going to magically transform a multi-hour commute into ten minutes.
Why would I want to own a "far more expensive" self driving car? I would love to get rid of my car and rely on fully autonomous Uber/Lyft. Even factoring in car rental for travel or moving or whatever I think it would be far cheaper that way. My car is a never ending money pit with insurance, gas, maintenance, registration, tires, it never stops.
You're going to pay for all those things even if it's indirectly through someone else who owns the vehicle. Most vehicle costs are related to mileage, not time--especially outside of areas where rust is a major factor.
I'm sure there will be a difference at the margins in cities where many people don't need a car on a daily basis. I assume that's already the case with Uber/Lyft today. And self-driving when it eventually arrives in those areas presumably will decrease costs some. But for someone who lives outside of dense city centers it seems likely they'll continue to want to own a car that is a model they like, equipped to their specifications, and storing all the various things they keep in a car.
>Why would I want to own a "far more expensive" self driving car?
It doesn't matter what you want, it matters what car companies do. They're not going to spend millions of dollars on R&D and marketing hype for self-driving cars only to make far less money on them than they do now on conventional cars.
>My car is a never ending money pit with insurance, gas, maintenance, registration, tires, it never stops.
Yes. That's the entire point, cars are micro-economies supporting multiple businesses and profit centers. And self driving cars will be the same, for the same reasons.
I'm not sure exactly what you're saying. Are you talking about some kind of monopoly or trust that prevents Uber/Lyft et al from getting access to self driving cars? That seems unlikely to me. As far as the various industries centered around car ownership I don't see why they'd go anywhere. Self driving cars will still need maintenance, oil, tires, etc. The only thing changing here is who owns the car. I don't want that to be me. As soon as it is economical for me to pay for transportation as a service and get rid of my money pit that is parked outside my apartment 24/7 I will do so.
You're right I didn't mention it, but that fact is actually crucial to my point. I'm lucky enough to live in a place where walking, taking the bus, riding a bike (the one I own or a bike share), or taking a Lyft get me almost everything I want at a very reasonable cost. And as new transportation services emerge I need my car less and less. So I'm hoping that in the near future there is some service that can take over the few things I do need a car for. Something like Car2Go or ZipCar is actually almost there.
It's in all likelihood already safer than the status quo by an order of magnitude. Not that these systems are that good, but the status quo really is that bad.
But it's the devil we know; apparently it's fine that we average 3,287 fatalities a day [0].
We also have more empty homes than homeless people, more food than starving people, crippling homelessness, and 1 in 4 children malnourished - in the US alone. I expect nothing from humanity, yet I am still disappointed.
> self-driving cars can't be just as safe as the status quo, but have to be far far better. It's an unreasonable need, but human nature
Human nature doesn't end up meaning much as soon as insurers start seeing the statistical difference.
As the difference gets more pronounced with the lane assist and auto stopping features, rates on any new cars without these features will go up relatively.
Insurers don't care all that much if you rear-end someone or get in some other "normal" accident. That happens often enough that they have figured out how to cover the cost at scale. They don't take that much of a hit anyway, because of your increased premium over time. They'd obviously prefer you do it less, but so long as their slice of the population doesn't do it any more than their competitor's slice, it's not a big deal, because they come out even. That's why they offer $10 and $20 discounts for safety and security features. It's not a big deal to the insurer, since they know exactly what the "value" of each of those features is at scale, but of course they want the cheaper-to-insure customers, so they'll toss them a small perk: a fraction of the money that feature statistically saves them.
What insurers are really scared of is when your 16yo kid and four of his/her friends drunkenly go off a cliff on their way to prom. That's hard to assess the risk of, and it costs them big bucks. These kinds of oddball high-dollar edge-case accidents are exactly the kind that self-driving cars seem to be getting into. A Tesla killing someone by driving into a barrier or semi truck sends chills down the insurer's spine, because it's only a little bit of bad luck (from the insurer's point of view) away from a not-so-dead customer who maxes out their collision and medical coverage.
If I were an insurer I would not be offering any automation discounts until we have more data.
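The "they know exactly what the value of each feature is at scale" point is just expected-value pricing. A toy sketch, where the numbers and the margin split are invented for illustration (real actuarial pricing is far more involved):

```python
def feature_discount(claim_rate_reduction, avg_claim_cost, margin=0.5):
    """Toy actuarial sketch: the premium discount an insurer can
    afford for a safety feature is bounded by the expected claim
    cost it saves per policy per year. All inputs are illustrative."""
    expected_saving = claim_rate_reduction * avg_claim_cost
    return expected_saving * margin  # pass on only part of the saving

# Feature cuts annual claim probability by 0.4 percentage points,
# average claim costs $12,000 → insurer keeps half, discounts half.
print(feature_discount(0.004, 12_000))  # → 24.0
```

That's how you end up at $10-$20 discounts: small, well-measured rate reductions on common accidents. The rare high-dollar tail events are exactly where this kind of calculation has no data to stand on.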
Well, yes, obviously. The commenter's point is that there will be a tipping point where insurance agencies look at those "crazy prom death" accident stats and see they happen 2x or 10x more often with human drivers. At that point, the cost of insuring a human driver is going to be 2x or 10x the cost of insuring an AI driver.
Not at all. Computers are supposed to be orders of magnitudes better than humans at certain tasks. If you're going to put computers in charge of our lives on the road at high speeds, why wouldn't we expect them to also be orders of magnitudes better at driving?
Also, if I'm a lot better than the "average driver" (where I assume such statistics also include people texting and driving or driving drunk or having not slept for 40h, etc), then I certainly wouldn't be satisfied with an automatic system that is only a little better than the average driver.
And I didn't even get into the whole thing about car companies, including (especially?) Tesla, comparing highway Autopilot accidents against accidents in all road conditions to show how their Autopilot is "better." Or the thing about such systems making errors that humans would never make, just like language translation systems can make totally different translation errors compared to humans, even if they "score the same."
To keep things short, yes, I think it's completely reasonable that self-driving systems should be far FAR better than your "average human driver." I don't even know why this is controversial.
Also, I agree with the Ford CEO that all carmakers, even Tesla, overestimated self-driving capability at the expense of making a great electric vehicle. The ability to make a great EV will matter far more for these companies' survival over the next 10 years than making a good self-driving car. But it's almost like most of them focused and invested more in self-driving capability than in switching to EVs. Big BIG mistake!
Even Tesla, the only major pure-EV company, mistakenly almost "bet the farm" on self-driving to the detriment of making a high-value EV. How? By putting very expensive "full self-driving hardware" (which, it turns out, is not actually full self-driving) into every Model 3 out there. Terrible decision. They would've done a lot better if every Model 3 were $5,000-$10,000 cheaper by default, without all of that gimmick.
> Computers are supposed to be orders of magnitudes better than humans at certain tasks.
Who claimed that driving is among those "certain tasks"? No, in the near future computers will at best be marginally better than human drivers. Computers are better at decision latency. But that merely compensates for some of their current shortcomings.
And that will be sufficient to save thousands of lives every year. Do you not like preventing deaths?
Delaying "marginally better" for "orders of magnitude better" means delaying the deployment of life-saving measures.
> It seems to me that self-driving cars can't be just as safe as the status quo, but have to be far far better. It's an unreasonable need, but human nature.
They need to be safer than me, not the average driver. I suspect the lower 50% of drivers cause > 50% of the collisions.
> They need to be safer than me, not the average driver
But if they are safer than the average driver, you gain from their widespread adoption even if they aren't safer than you, because you don't have the choice of replacing other drivers with yourself...
Defensive driving allows me to counter most bad drivers.
If we're talking about removing dangerous drivers from the road, I'm all for it. It should be next to impossible to get a license for most people, if what we really care about is safety.
> It seems to me that self-driving cars can't be just as safe as the status quo, but have to be far far better.
They probably, for acceptance, have to be consistently as safe in like circumstances; they can't just be as safe on average. If there are circumstances in which they are much safer, that also means they end up needing to be much safer on balance, but it's not really the on-balance comparison that drives acceptance.
I am perfectly happy to accept autonomous cars killing even a few more people than human drivers do, in exchange for the utility they offer. You can use your time in the car to do other things. That's actual life-hours saved.
And the promise is that they're only going to get more reliable with time.
Conversely, holding back self-driving cars just because they upset your sensibilities about who specifically should die (i.e. random group A instead of random group B, even though A is larger) means you're effectively advocating that more people should die. That's grossly negligent.
Interesting... I would actually think garbage trucks would be among the last vehicles to be automated. They have to do some insane navigating in small spaces. It seems like the absolute hardest thing to automate that I can think of.
Do they? Most I see just cruise down pretty open residential streets grabbing convenient cans. Like most things, it'll probably be a mix: most service covered by automated trucks, with the more troublesome routes still driven by people until the kinks get worked out.
I would imagine actually picking up the cans is the hardest part. They often look different from each other, people place them haphazardly, and you might even have to distinguish trash from recycling.