A similar parallel is the enthusiasm around self-driving cars. There was initial optimism (or hype) fueled by the success of deep learning on perception problems, but conflating solving perception with the larger, more general problem of self-driving leads to an overly optimistic bias.
Much of the takeaway from this year's North American International Auto Show was that manufacturers are reluctantly realizing the real scope of the problem and trying to temper expectations. [0]
And self-driving is still a problem orders of magnitude simpler than AGI.
Re: Comparing self-driving cars to AGI: It's counterintuitive, but depending on how versatile the car is meant to be, the problems might actually be pretty close in difficulty.
If the self-driving car has no limits on versatility, then, given an oracle for solving the self-driving problem, you could use that oracle to build an agent that answers arbitrary YES/NO questions. Namely: feed the car fake input so it thinks it has driven to a fork in the road where a road sign says "If the answer to the following question is YES then the left road is closed, otherwise the right road is closed."
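A minimal sketch of that reduction in C++ (every name here is hypothetical; drive() is stubbed out, because the full self-driving oracle is exactly the thing being assumed, not implemented):

    #include <iostream>
    #include <string>

    // Hypothetical types for the thought experiment.
    struct Scene { std::string signpost; };
    enum class Turn { Left, Right };

    // Stub standing in for the assumed oracle: a car that reads and acts
    // on arbitrary road signs. (Trivially returns Left so this compiles
    // and runs; the real oracle is the premise of the argument.)
    Turn drive(const Scene& s) { (void)s; return Turn::Left; }

    // The reduction: a fully versatile car must take the open road, so
    // its choice of fork encodes the answer to the question on the sign.
    bool answer(const std::string& question) {
        Scene fake;
        fake.signpost = "If the answer to \"" + question +
                        "\" is YES then the left road is closed, "
                        "otherwise the right road is closed.";
        return drive(fake) == Turn::Right;  // took the right fork => YES
    }

    int main() {
        std::cout << (answer("Is 7 prime?") ? "YES" : "NO") << '\n';
    }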
Compare with, e.g., proofs that C++ templates are Turing complete. Those proofs involve feeding the compiler extremely unusual programs that would never come up organically, but that doesn't invalidate the result.
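For a taste of what those proofs build on, here's the classic compile-time factorial, a program that "runs" entirely inside the compiler (this is just standard template metaprogramming, not the actual proof constructions, which are far more elaborate):

    // The template engine evaluates this recursion during compilation;
    // no factorial code exists in the emitted binary.
    template <unsigned N>
    struct Factorial {
        static constexpr unsigned long long value =
            N * Factorial<N - 1>::value;
    };

    template <>  // base case terminates the recursion
    struct Factorial<0> {
        static constexpr unsigned long long value = 1;
    };

    static_assert(Factorial<10>::value == 3628800,
                  "computed entirely at compile time");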
That's the problem with all of the fatuous interpretations of "level 5" self-driving floating around.
"It has to be able to handle any possible conceivable scenario without human assistance" so people ask things like "will a self-driving car be able to change its own tyre in case of a flat" and "will a self-driving car be able to defend the Earth from an extraterrestrial invasion in order to get to its destination".
They need to update the official definition of level 5 to "must be able to handle any situation that an average human driver could reasonably handle without getting out of the vehicle."
(Although the "level 1" - "level 5" scale is a terrible way to describe autonomous vehicles in any case and needs to be replaced with a measure of how long it's safe for the vehicle to operate without human supervision.)
Very well put. And you could argue that it is not as much a stretch as it seems.
Self-driving cars would realistically have to keep functioning in situations where arbitrary communication with humans is required (which happens daily), and that tends to turn into an AI-hard problem quite quickly.
I was thinking in terms of a "minimum viable product" for self-driving cars, which I have a hunch will be of limited versatility compared to what you describe. For a truly self-driving car as capable as a human in most situations, you may be right.
I know this is meant jokingly, but for many cities (especially relatively remote ones), trains are not considered viable because they have strictly defined routes.
Many cities choose to forgo trains for buses in large part due to the lower upfront costs and the ability to change routes as the needs of the populace change.
>And self-driving is still a problem orders of magnitude simpler than AGI.
You sure?
It might very well be only a single order of magnitude harder, or not any harder at all, given that solving all the problems of self-driving at times even delves into questions of ethics (whom do I endanger in this lose-lose situation, etc.).
I could certainly be wrong; it's just speculation on my part, based on the assumption that self-driving issues are a smaller subset of AGI problems.
I actually don't think the ethics part is all that hard if (and that's a big if) there can be agreement on a standard approach. An example would be a utilitarian model, though that is often incompatible with egalitarian ethics. The approach reeks of technocracy, but it's certainly a solvable problem.
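To make "solvable" concrete: once a harm function is agreed upon, the utilitarian part is mechanically just an argmin. A hypothetical sketch (Maneuver, expected_harm, etc. are invented for illustration); the genuinely contested part is defining expected_harm, which is exactly where the egalitarian objections land:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    struct Maneuver {
        const char* name;
        double expected_harm;  // the hard, contested input
    };

    // Utilitarian rule: pick the option minimizing expected harm.
    const Maneuver& choose(const std::vector<Maneuver>& options) {
        // Assumes options is non-empty.
        return *std::min_element(options.begin(), options.end(),
            [](const Maneuver& a, const Maneuver& b) {
                return a.expected_harm < b.expected_harm;
            });
    }

    int main() {
        std::vector<Maneuver> options = {
            {"brake hard", 0.3}, {"swerve left", 0.7}, {"swerve right", 0.5}};
        std::cout << choose(options).name << '\n';  // prints "brake hard"
    }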
[0] https://www.nytimes.com/2019/07/17/business/self-driving-aut...