The problem is that there are literally uncountably many situations that a human with "general intelligence" will understand and react to accordingly, sometimes smoothly, sometimes less so. But a non-conscious automatic entity needs to have the required behavior programmed in explicitly.
So yes, you might argue that for this particular situation, you "just" need to put in the proper programming and AI/ML training and then "maybe the car will notice more often than a human" as long as the situation is within very specific bounds. At least now that somebody made an article about it.
But it does not change that the autonomous machine does not understand the complex world it is driving in in the slightest, and that, for example, swerving into an unmanned fruit stand without being able to brake is much better than swerving into an unmanned gas pump.
> for example, swerving into an unmanned fruit stand without being able to brake is much better than swerving into an unmanned gas pump.
This is just ridiculous.
If you believe that humans are making rational decisions in split seconds, then you are delusional. A swerving, brakeless, human-driven car will hit whatever happens to be where physics takes it. The scared monkey descendant holding on to the controls is as likely to do the wrong thing as the right thing. Maybe a fighter jet pilot or a rally driver could do better, but I wouldn't count on it.
And besides, how did that AV end up swerving with no brakes? This is the reason autonomous vehicles are set up with redundant brake actuation. If I had doubts about our ability to stop the car, I would much sooner implement a third independent brake system than try to solve whatever philosophical runaway trolley problem you are concocting here.
Runaway trucks are a very real problem, we even have emergency escape ramps in places where they are most common. There is usually no split second decision here: You have a comparable eternity to decide where you want your non-braking truck to end up.
However you construe my particular example to be "ridiculous" under the additional constraints that you imposed on it yourself, you are looking, right in this article, at an instance where an autonomous vehicle failed to make a reasonable decision that a human would have made.
In fact, a human, the garbage truck driver, had to react. They were not a "jet pilot" or "rally driver", and yet they were perfectly able to resolve the situation that the AV had gotten itself into.
> you are looking, right in this article, at an instance where an autonomous vehicle failed to make a reasonable decision that a human would have made.
And remote assistance/teleoperation was reacting at the same time, for the vehicle.
Humans will screw up similar cases: they end up in the emergency vehicle's way, panic, and do the wrong thing. In fact, I saw a fire truck impeded on Wednesday because of some judgment mistakes by a human driver that an AV would be unlikely to make.
So, frequency of "doing the wrong thing" and severity (here, probably measured in seconds) are completely reasonable questions to ask, even if the circumstances where each tends to mess up are different. I don't think it's reasonable to ask that the autonomous vehicle be superior to a typical human on every axis of performance, just overall superior.
Yes. And autonomous trucks are an engineering problem, not an abstract philosophical question. You can ask the question: "How is the autonomous truck going to know if it should slam into the petrol station or into the fruit stand?" And there is no good answer to that. Or you can ask: "How do we engineer autonomous trucks so the probability of a runaway incident is lower than epsilon?" And then suddenly it turns out this is a solvable problem with our existing tools. (With redundant brakes, and with built-in brake health checks.)
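To make the "built-in brake health checks" idea concrete, here is a minimal sketch; the circuit names, pressure threshold, and two-of-three policy are all invented for illustration, not anyone's actual safety case:

```python
# Toy pre-departure brake check: refuse to drive (or degrade to a safe
# stop) unless enough independent brake circuits pass self-test.
# All names and thresholds below are assumptions for illustration.

MIN_PRESSURE_PSI = 80.0   # assumed pass threshold per circuit
REQUIRED_HEALTHY = 2      # assumed policy: at least 2 of 3 circuits must pass

def brakes_ok(circuit_pressures: dict) -> bool:
    healthy = [name for name, psi in circuit_pressures.items()
               if psi >= MIN_PRESSURE_PSI]
    return len(healthy) >= REQUIRED_HEALTHY

readings = {"primary": 95.0, "secondary": 91.0, "emergency": 40.0}
print(brakes_ok(readings))  # True: degraded, but still within the assumed policy
```

The point is that the check runs before the philosophical question ever arises: a truck that never rolls with marginal brakes rarely has to choose between fruit stands and petrol pumps.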
> under the additional constraints that you imposed on it yourself
I assume you mean compared to what a competent human would do? That is implicit in the whole discussion. Human drivers are the current best practice. You are asking the "fruit stand vs petrol station" question presumably because a human would know to choose the fruit stand.
Nobody is asking this alternate question, because they would immediately feel it is ridiculous: A young boy is crossing the road in front of an AV. In 30 years he will become a politician, will instigate a violent sectarian war which will result in the death of millions of innocents. Should the AV run him over thus preventing all that suffering?
Just to state the obvious: no, the AV should not run the boy over. But why is nobody asking this question? Because it is obviously silly. We as humans can't look at a young boy and know him to be a future mass murderer; therefore we don't expect this from an autonomous car either.
> you are looking, right in this article, at an instance where an autonomous vehicle failed to make a reasonable decision that a human would have made.
Oh yes. And it is a fascinating one. I was reacting to your “fruit stand vs petrol pump” hypothetical not to the article directly.
In the real world someone has to program the self driving system to make the decision about how to react. That is, there is a software team somewhere that is going to have to decide what behaviour to program into the system for the trolley problem. So, your statement that it is not an abstract philosophical question is patently false. Obligatory link to The Good Place making the trolley problem real here: https://www.youtube.com/watch?v=DtRhrfhP5b4
It is hopelessly naive to think that these problems can simply be engineered away. In the real world, failures happen in redundant systems. Air brakes are supposed to "fail safe", but in reality how well a truck or trailer's brakes are maintained, engine state, speed, loading, temperature, and grade all combine to make them fail. Trains have multiple braking systems, yet sometimes all three fail and a spectacular accident occurs.
In addition to all the traditional mechanical issues, self driving vehicles have tonnes of software failure modes that traditional cars do not. More importantly, those software issues are not well understood at this point in time.
If you want to better understand why software can't be trusted to Do The Right Thing, go back and read the investigations analyzing failures of systems that came before. The Therac-25 is a good place to start: https://en.wikipedia.org/wiki/Therac-25
No system a human can build can be completely intrinsically safe. Mistakes by designers occur. Safety is a process that takes time and effort, and it will take decades for self driving cars to work out all the bugs.
> If you believe that humans are making rational decisions in split seconds, then you are delusional. A swerving, brakeless, human-driven car will hit whatever happens to be where physics takes it.
I was in a car accident a few years ago where someone left a stop sign without realizing that there was traffic (me) in a lane they couldn't see. I wouldn't describe it as having felt like time slowed down, but that isn't an absurd way to describe it. I had a weird sense of clarity for the few moments before impact. I was able to slam the brakes, but I was way too close to avoid hitting them. I had a distinct feeling of "the front of the car is heavier and there's a person there". I swerved left instead of right to avoid hitting the front and slammed into the rear passenger side of their car. This spun their car completely around and totaled both cars, but both of us were able to walk away without any injuries.
Some really fascinating studies have shown that your perception of time doesn't actually slow down in moments of heightened intensity; however, the detail of the memory (when reflected on later) is higher than for a non-traumatic experience.
An interesting summary of how they studied this if you're curious:
> But it does not change that the autonomous machine does not understand the complex world it is driving in in the slightest
It only needs to understand when to ask for help:
> the driverless car had correctly yielded to the oncoming fire truck in the opposing lane and contacted the company's remote assistance workers, who are able to operate vehicles in trouble from afar. According to Cruise, which collects camera and sensor data from its testing vehicles, the fire truck was able to move forward approximately 25 seconds after it first encountered the autonomous vehicle
I don’t understand why people think that driverless cars need to deal with every one in a million scenario, it makes no sense.
The phrase "only needs" is doing a lot of work here. It's one of my least favorite phrases (in a close race with "just do this") and is often associated with unrealistic feature requests.
I'm sure there is a disagreement between SFFD and Cruise as to exactly what happened, but the article implies that the Cruise vehicle isn't the one that moved to fix the problem.
> The fire truck only passed the blockage when the garbage truck driver ran from their work to move their vehicle.
Even if the Cruise vehicle was able to call for help, the car not only needs to call for help, but also wait for a response (at 4am), and give a remote human enough information to control a car remotely in a safe manner. None of these things are easy... not impossible, but not "only needs" easy.
> driverless cars need to deal with every one in a million scenario
Of course driverless cars need to deal with one in a million scenarios. Human drivers deal with one in a million scenarios every day. Nothing is ever the same when driving, so there are always subtle changes. But even if there is an unusual situation, there must be some kind of response. Even if that response is to move to the side of the road and put on hazard lights to indicate that it doesn't know what to do (which it may have done)... that would have been a better response than to do nothing and sit to wait for a human. There should be a default "unknown input" failure mode. The disagreement here is that SFFD didn't like how the Cruise vehicle failed. Maybe there is a better approach.
We are expecting these vehicles to move us around 24/7. That's a lot of trips. At this rate, one in a million scenarios will happen every day. That's the problem with large numbers -- even rare events are to be expected when N is high enough.
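As a back-of-the-envelope illustration (both the probability and the trip count below are invented numbers, not anyone's real statistics):

```python
# If a scenario occurs on one in a million trips, and a fleet logs
# two million trips a day, the "rare" event becomes a daily occurrence.
p_scenario = 1e-6      # assumed per-trip probability of the rare scenario
trips_per_day = 2e6    # assumed fleet-wide daily trip count

expected_per_day = p_scenario * trips_per_day
print(expected_per_day)  # 2.0 -- expect it roughly twice every single day
```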
> I don’t understand why people think that driverless cars need to deal with every one in a million scenario, it makes no sense.
That a particular situation is rare does not mean that you won't encounter multiple rare situations in a given time frame.
(By the way, the article stated at the beginning that the garbage truck had to move? Either way, it required manual intervention and a human's situational awareness. How does the car ask for the proper help at freeway speeds, where you have seconds, if not split seconds?)
"But it does not change that the autonomous machine does not understand the complex world it is driving in in the slightest, and that, for example, swerving into an unmanned fruit stand without being able to brake is much better than swerving into an unmanned gas pump."
It seems it would have a way of prioritizing such things. That doesn't seem particularly complicated, to be honest... weighted decision making is certainly within its capacity. E.g. a shopping cart has fewer "avoid hitting" points than a stroller.
It's not like every scenario has to be explicitly programmed in, nor does the program need to run some analysis on a detailed backstory to justify that a baby is more valuable than groceries. In effect, somebody -- probably not a programmer either -- just needs to enter some numbers into a spreadsheet.
(Yes, there is complex programming that allows that to be manifested in the car's decisions, but the idea that programmers are themselves constantly making "moral calls" in the code, rather than in the control data, is fiction.)
And if it does have such prioritization in its logic, I'd say yeah, it "understands" the world in that respect. Unless you have defined the word "understand" in some mystical way that precludes non-biological machines by definition.
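To make the "numbers in a spreadsheet" idea concrete, here is a toy sketch; every class name and weight below is invented for illustration and is nobody's real cost model:

```python
# Hypothetical avoidance weights: higher means "try harder not to hit it".
AVOIDANCE_WEIGHT = {
    "stroller":      1_000_000,
    "pedestrian":    1_000_000,
    "gas_pump":         50_000,   # secondary fire/explosion hazard
    "fruit_stand":         500,
    "shopping_cart":       100,
}
DEFAULT_WEIGHT = 10_000           # cautious default for unrecognized objects

def trajectory_cost(objects_hit):
    """Total penalty for everything a candidate path would collide with."""
    return sum(AVOIDANCE_WEIGHT.get(o, DEFAULT_WEIGHT) for o in objects_hit)

# With braking unavailable, pick the least-bad feasible trajectory.
candidates = {"swerve_left": ["fruit_stand"], "swerve_right": ["gas_pump"]}
print(min(candidates, key=lambda t: trajectory_cost(candidates[t])))
# -> swerve_left
```

Note that the moral content lives entirely in the table; the code that consumes it is morally inert, which is the point about control data versus code.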
> It seems it would have a way of prioritizing such things.
You are putting the cart before the horse. The problem is not in prioritization, the problem is in having the correct ontology to even get to the "prioritization" stage.
Does the car know what a fruit stand is? Does it know what a gas pump is? Does it know how the fruit stand relates to the gas pump in "expected outcome when being hit by a car"?
If you say "we can program that in", read my post again.
On a level below identifying stop signs and lollipop ladies and push carts, an SDC's stack needs to be able to identify (a toy sketch follows this list):
1) Driveable areas. If something looks like a cliff maybe don't go there.
2) Fleeting obstacles. Dust blowing in the wind. A stray plastic bag, winging its way northwards to the waiting maw of a baby turtle. A person with a borderline credit score. A stray cat chasing a bug. That sort of thing.
3) Anything else that's physically present in the path of the car. Doesn't matter what it is. Do Not Hit The Thing is the second lesson anyone learns when being taught to drive, after Make It Go So Hit The Thing Is Even An Option.
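Here is a toy triage of those three buckets; every attribute and threshold is invented for illustration, since real stacks use learned perception models rather than hand-written rules like these:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    on_drivable_surface: bool  # 1) is it somewhere we could actually drive?
    mass_estimate_kg: float    # rough size/mass guess from perception
    moving_erratically: bool   # bag-in-the-wind style motion

def triage(d: Detection) -> str:
    if not d.on_drivable_surface:
        return "out_of_scope"        # 1) not in a drivable area
    if d.mass_estimate_kg < 0.5 and d.moving_erratically:
        return "fleeting"            # 2) dust, bags, leaves: don't slam brakes
    return "do_not_hit_the_thing"    # 3) everything else gets avoided

print(triage(Detection(True, 0.1, True)))    # fleeting
print(triage(Detection(True, 60.0, False)))  # do_not_hit_the_thing
```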
I would imagine the car has ways of identifying things that are specific hazards, such as a gas pump. Fruit stand is probably categorized as "other."
"the problem is in having the correct ontology to even get to the "prioritization" stage."
That part isn't done by the program; it is done by whoever enters the prioritization numbers. That is, someone, possibly a committee, can dial up the "avoid gas pumps" weighting relative to the "avoid baby stroller" weighting if they are concerned that cars might swerve so widely to avoid coming near a stroller that they risk hitting a different hazard. Or they can dial up the weight of grocery carts relative to dogs, since children might be in a grocery cart. Etc.
Those are humans, who can do whatever ontological analysis they need when deciding on the settings. The car doesn't need access to any of that; it just needs a general lookup table that can help it make optimal decisions based on the human-entered value system.
I mean, you're right; someone making that list might not think to include "centaur", and maybe one Halloween a child is dressed up as one, and the computer vision system interprets the "centaur" as a horse instead of a child, and it makes the wrong decision, but how many centaur-related accidents do you think self-driving cars are going to be involved in each year?
It's completely feasible to imagine writing a list of the top, say, 100 things in the world that a car needs to make morally-significant decisions about, and then deal with every other accident or near-miss after the fact. Interactions with unrecognised objects should be rare enough to be a rounding error when comparing accident rates between autonomous and human-driven miles.
Even if an automated vehicle's intelligence misinterpreted a human child as a horse, it should only hit it in the unavoidable circumstance of trying to preserve a human life.
If it’s choosing between hitting a pole and hitting a pony, it should always hit the pole so long as no one is injured.
The real problem is that occasionally these cars today mistake roads for oceans, walls for roads, and people for inanimate poles.
You're asking the car to make moral decisions. Given the choice between hitting a child or an old woman, which will it choose? A bicyclist vs a pedestrian? A lemonade vendor vs a hotdog vendor?
This is all assuming it can distinguish between all of these objects, and that a real person could assign relative moral values to hitting one over the other.
> [...] swerving into an unmanned fruit stand without being able to brake is much better than swerving into an unmanned gas pump.
That's a great example.
Not to mention triggering some Rube Goldberg-machine-like chain reaction (even one with just a few steps), where a series of events would need to be predicted.
There are not uncountably many – we live in a finite world governed by understandable physical and civic laws.
There's an actually measurable rate of humans blocking other traffic, and for how long before resolution, and an actually measurable rate of autonomous vehicles blocking other traffic, and for how long before resolved.
If the rate for autonomous vehicles is already below that of humans, or rapidly headed there, that's far more important to note than to theorize about other corner-cases.
(Also, as a San Francisco driver, I have serious doubts about the "general intelligence" of my fellow drivers. I don't see any reasons to hold autonomous cars to a higher standard – perhaps a much higher standard? – than other cars.)
OP was obviously not using that term mathematically (i.e. a set too large to be put in one-to-one correspondence with the natural numbers), and obviously meant something in the neighborhood of "effectively not countable". And, again, in all but the formal mathematical meaning of the word "countable", many things are not countable (e.g. no one will live long enough to count all the natural numbers).
In the real world, assigning an ordinal number to an object/event/thing has a nonzero time cost. And accounting for every situation in software has a much greater cost.
> If the rate for autonomous vehicles is already below that of humans, or rapidly headed there, that's far more important to note than to theorize about other corner-cases.
Okay, but dangerous human drivers get systematically removed from the streets. Are we doing the same for self-driving cars? In this context, does every Tesla count as the same "driver"?
Maybe all Teslas should have their autonomous driving centrally disabled every time one causes an accident, or breaks a law, until that specific issue is fixed. But it would be impossible to run a car company that way, so of course that takes priority over, say, keeping innocents alive, right? /S
While I do agree that even a below-average human intelligence can better cope with these kinds of situations (also because it has access to many more cues for understanding; for example, we can tell what's going on from the face, the hand waving, and/or the yelling of the driver in the other vehicle; humans are quite good at understanding other humans), I don't think it follows that consciousness is necessary to achieve that.
Autonomous cars will crash in situations where humans wouldn't, but the opposite is also true. Autonomous cars don't fall asleep, drive drunk, or get distracted on their phones.
Personally I try to stay alert while I drive, so I feel I'm safer driving myself than letting the machine do it. But I'm less confident that the best autonomous cars will have more accidents than humans in general, and they're improving all the time.
People can affix fake sirens to their cars, and if they blare them at you, you should pull over... after the incident, society then throws an incredibly harsh penalty at the offender.
To make sure things run smoothly, it benefits society to never doubt or question whether people who say they're police officers actually are; they might have something incredibly important to say (like, hey, there's an active shootout ahead, please don't keep driving and get yourself injured).
If you as an individual create some funny gag to fake out autonomous vehicles into thinking that you're a cop whether you cheekily do it with "TOTALLY NOT A COP CAR" written on the side to get a laugh or not... you're almost certainly going to be charged with a felony crime.
> needs to have the required behavior programmed in explicitly
This is missing the entire point of ML. ML is literally defined as not having to explicitly program responses in for every situation.
Needing to pull over because a fire truck has told you it's coming your way in 2 minutes is pretty easy compared with some of those other "uncountably many" situations these cars need to deal with.
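As a toy illustration of that definition of ML (the features, labels, and model below are all invented, and a real driving policy is nothing this trivial):

```python
from sklearn.tree import DecisionTreeClassifier

# features: [siren_audible, lights_flashing, vehicle_closing_fast]
X = [[1, 1, 1], [1, 1, 0], [0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 1]]
y = ["pull_over", "pull_over", "continue", "continue", "slow_down", "pull_over"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# A feature combination never seen during training still gets a response,
# learned from the examples rather than hard-coded:
print(clf.predict([[1, 0, 0]]))  # likely ['pull_over']
```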
> the autonomous machine does not understand the complex world it is driving in
Daniel Dennett would like to have a word with you. It's perfectly possible for systems to "understand" things for any useful definition of the word "understand". A calculator absolutely "understands" arithmetic.
> This is missing the entire point of ML. ML is literally defined as not having to explicitly program responses in for every situation.
So how do you know how the system will respond to an arbitrary situation? You could easily argue that we don't know how an arbitrary human will respond to an arbitrary situation, but we have systems in place to deal with the consequences if they handle it badly.
For example, if a driver handles a situation badly enough, they could lose their license. If an autonomous car does something bad enough that a human would have lost their license, what happens? Do all of that company's cars get pulled off the road until the bug is fixed and validated?
> So how do you know how the system will respond to an arbitrary situation?
You put them in that situation and see how they respond. If they respond badly, you keep training them until they respond better. I'm not saying it's easy, but I am saying it's exactly what autonomous-car developers have been doing all this time.
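A minimal sketch of what "put them in that situation and see how they respond" looks like as a regression harness; the scenarios and the stand-in policy below are invented for illustration:

```python
# Stand-in for the real driving policy: maps an observation to an action.
def toy_policy(obs: dict) -> str:
    if obs.get("siren_behind"):
        return "pull_over"
    if obs.get("obstacle_ahead"):
        return "brake"
    return "continue"

# Each regression case pairs an observation with its acceptable responses.
SCENARIOS = {
    "fire_truck_approaching": ({"siren_behind": True}, {"pull_over"}),
    "double_parked_truck":    ({"obstacle_ahead": True}, {"brake", "pull_over"}),
    "clear_road":             ({}, {"continue"}),
}

def failing_scenarios(policy):
    return [name for name, (obs, ok) in SCENARIOS.items()
            if policy(obs) not in ok]

print(failing_scenarios(toy_policy))  # [] -- every cataloged scenario passes
```

Every newly discovered failure becomes another entry in the catalog, which is the loop described above.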
> If they respond badly, you keep training them until they respond better.
Right, but what do you do with all the other cars on the road that presumably still have the bad behavior (while the fix is being developed)? Just assume that the situation is rare enough that you'll be able to fix it before it happens again?
You're basically saying test-driven development can find all problems with software, and it's well-known that that isn't the case. It's very dangerous to assume TDD is all that's needed when lives are at stake.
> This is missing the entire point of ML. ML is literally defined as not having to explicitly program responses in for every situation.
I suggest you look up the no-free-lunch theorem of supervised learning.
> A calculator absolutely "understands" arithmetic.
I'm not sure how that's anything other than proving my point. Let's say a calculator "understands" arithmetic. It does not understand anything I would apply the calculations I make with it to. I cannot tell it "calculator, go do my taxes".
Your particular example is not even true: A calculator is able to perform calculations, it does not understand any of the axioms, theorems, and uses around it.
> A calculator is able to perform calculations, it does not understand any of the axioms, theorems, and uses around it.
It understands the axioms because they've been literally built into its tiny brain. It doesn't understand the uses of arithmetic because nobody programmed it to.
Can you provide me a non-vacuous definition of "understand" that doesn't rely on human consciousness being extra-special and magic?
I don't think it's a question of knowing what an axiom is or how a calculator is implemented. I think it's a question of disagreeing on what "understanding" means.
What does it mean to understand something? Obviously (to me and I presume to you) a calculator doesn't understand anything! It doesn't have the capacity for understanding. Obviously (according to, I presume, feoren and Dennett) "understanding" means something very different, and a calculator is perfectly capable of "understanding" arithmetic.
There is no math in a calculator. It's a pile of logic gates assembled in a way that appears to perform mathematical operations. An ALU has no "understanding" of arithmetic; it's just a canned, finite set of inputs and outputs. Not an axiom to be found.
The pile of logic gates is an encoding of the axioms. The fact that it evaluates mathematical expressions correctly is both necessary and sufficient to show that it understands arithmetic. Therefore the calculator knows the axioms and understands arithmetic.
Except it doesn’t implement the axioms of math, it implements a crude facsimile of them for a certain subset of numbers, because what it’s really doing is a non-mathematical physical process.
If you want to argue that an ALU is performing “boolean logic” just because it’s made of logic gates, be my guest, but in my opinion that’s a bit like saying a bucket is “doing math” because if you put 5 rocks in and add 7 rocks, it’s smart enough to contain 12 rocks when you’re done.
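For concreteness, here is the disputed "pile of logic gates": a ripple-carry adder built from nothing but XOR, AND, and OR, which is a tiny slice of what an ALU does when it "adds". Whether this counts as understanding is exactly the disagreement above:

```python
def full_adder(a: int, b: int, carry_in: int):
    """One bit of addition from bare gates: ^ is XOR, & is AND, | is OR."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)
    return s, carry_out

def add_8bit(x: int, y: int) -> int:
    carry, result = 0, 0
    for i in range(8):  # ripple the carry through 8 bit positions
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result       # carries past bit 7 are simply dropped

print(add_8bit(5, 7))    # 12
print(add_8bit(255, 1))  # 0 -- the "crude facsimile": it wraps, the integers don't
```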
It is actually "obviously true" unless you believe that human brains have a special metaphysical magic that makes them "more than just a system". That's literally the only alternative: human brains are magic and only humans are ever capable of "understanding". It's a vacuous definition of the word. Systems can understand things, which is good, because the human brain is nothing more.
See: Daniel Dennett's response to Searle's Chinese Room.
I did not claim it's about "metaphysical magic" that machines lack. But I do believe that humans usually have a very rich and multifaceted life outside of driving on a road, compared to cars, and are therefore able to integrate "training data" that cars cannot. Unless your plan is to make them active members of society.
But doing arithmetic and understanding it are different things. The latter requires some reflection on the concepts, while the former is just carrying it out. There are many humans who can do arithmetic but don't understand it.
Also, I do believe there is a metaphysical difference between a human and a calculator: literally magic.