> I just can't see Kurzweil being in the same league as Peter Norvig.
The problem with Peter Norvig is that he comes from a mathematical background and is a strong defender of the use of statistical models that have no biological basis.[1] While they have their uses in specific areas, they will never lead us to a general-purpose strong AI.
Lately Kurzweil has come around to seeing that symbolic AI and Bayesian networks have been holding AI back for the past 50 years. He is now a proponent of biologically inspired methods similar to Jeff Hawkins' approach of Hierarchical Temporal Memory.
Hopefully, he'll bring some fresh ideas to Google. This will be especially useful in areas like voice recognition and translation. For example, just last week I needed to translate "I need to meet up" to Chinese. Google translates it to 我需要满足, meaning "I need to satisfy". This is where statistical translations fail, because statistics and probabilities will never teach machines to "understand" language.
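The failure mode being described can be made concrete with a toy phrase-based translator: with no sentence-level understanding, the decoder just picks the highest-probability target phrase, so an ambiguous source phrase resolves to whichever sense was most frequent in the training data. This is only a sketch; the phrase table and probabilities below are invented for illustration, not taken from any real system.

```python
# Toy phrase-based translation: choose the most probable target phrase
# for each source phrase, with no awareness of the sentence as a whole.
# The phrase table and probabilities are invented for illustration.
phrase_table = {
    "i need to": [("我需要", 0.9)],
    "meet up": [("满足", 0.6),   # "satisfy" - assumed more frequent in training data
                ("见面", 0.4)],  # "meet (face to face)" - the intended sense
}

def translate(phrases):
    # Greedy decoding: take the highest-probability option for each
    # phrase independently, ignoring interactions between choices.
    return "".join(max(options, key=lambda t: t[1])[0]
                   for options in (phrase_table[p] for p in phrases))

print(translate(["i need to", "meet up"]))  # 我需要满足 ("I need to satisfy")
```

Real systems add a language model and more context, but when the input is a short, ambiguous fragment, something very much like this greedy sense-selection is what produces 满足 instead of 见面.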
For several hundred years, inventors tried to learn to fly by creating contraptions that flapped their wings, often with feathers included. It was only when they figured out that wings don't have to flap and don't need feathers that they actually got off the ground.
It's still flight, even if it's not done like a bird. Just because nature does it one way doesn't mean it's the only way.
(On a side note, multilayer perceptrons aren't all that different from how neurons work - hence the term "artificial neural network". But they also have a purely mathematical/statistical grounding. The divide between the two is not clear-cut; the whole point of mathematics is to model the world.)
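The analogy can be made concrete: an artificial neuron is just a weighted sum of inputs pushed through a nonlinearity, loosely mirroring dendritic integration and a firing-rate response. A minimal sketch, with hand-picked weights (no training involved), showing two such "neurons" plus an output unit computing XOR - a function no single neuron can:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs (crudely analogous to dendritic integration),
    # squashed by a sigmoid (crudely analogous to a firing-rate response).
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

def xor_net(x1, x2):
    # Weights chosen by hand so each hidden unit approximates a logic gate.
    h1 = neuron([x1, x2], [20, 20], -10)    # roughly OR
    h2 = neuron([x1, x2], [-20, -20], 30)   # roughly NAND
    return neuron([h1, h2], [20, 20], -30)  # roughly AND

print(round(xor_net(0, 1)))  # 1
print(round(xor_net(1, 1)))  # 0
```

Whether this counts as "biological" or "statistical" is exactly the point: the same object reads as a cartoon neuron or as a parametric function, depending on who is looking.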
Nobody knows how neurons actually work: http://www.newyorker.com/online/blogs/newsdesk/2012/11/ibm-b.... We are missing vital pieces of information needed to understand that. Show me your accurate C. elegans simulation and I will start to believe you have something.
Perhaps in a hundred years the argument will go like this: for several hundred years, inventors tried to build an AI by creating artificial contraptions that ignored how biology worked, inspired by a historically fallacious anecdote about inventors who supposedly only tried to fly by building contraptions with flapping wings. It was only when they figured out that evolution (massively parallel mutation and selection) is actually necessary that they managed to build an AI.
> For several hundred years, inventors tried to learn to fly by creating contraptions that flapped their wings...
To quote Jeff Hawkins:
"This kind of ends-justify-the-means interpretation of functionalism leads AI researchers astray. As Searle showed with the Chinese Room, behavioral equivalence is not enough. Since intelligence is an internal property of a brain, we have to look inside the brain to understand what intelligence is. In our investigations of the brain, and especially the neocortex, we will need to be careful in figuring out which details are just superfluous "frozen accidents" of our evolutionary past; undoubtedly, many Rube Goldberg–style processes are mixed in with the important features. But as we'll soon see, there is an underlying elegance of great power, one that surpasses our best computers, waiting to be extracted from these neural circuits.

...

For half a century we've been bringing the full force of our species' considerable cleverness to trying to program intelligence into computers. In the process we've come up with word processors, databases, video games, the Internet, mobile phones, and convincing computer-animated dinosaurs. But intelligent machines still aren't anywhere in the picture. To succeed, we will need to crib heavily from nature's engine of intelligence, the neocortex. We have to extract intelligence from within the brain. No other road will get us there."
As someone with a strong background in biology who took several AI classes at an Ivy League school, I found that all of my CS professors had a disdain for anything to do with biology. The influence of these esteemed professors and the institutions they perpetuate is what's been holding the field back. It's time people recognized it.
> As Searle showed with the Chinese Room, behavioral equivalence is not enough.
The Chinese Room experiment doesn't show only that. It also shows how important the inter-relationships between the component parts of a system are.
We're reducing the Chinese Room to the person and the objects they are using, such as a lookup table. But what we're missing is the complex pattern behind the answers: the structure and mutual integration that exists in their web of relations.
If we could reduce a system to its parts, our brains would be just a bag of neurons, not a complex network. We'd reach the conclusion that brains can't possibly have consciousness, on the grounds that there is no "consciousness neuron" to be found in there. But consciousness emerges from the inter-relations of neurons, and the Chinese Room can understand Chinese on account of its complex inner structure, which models the complexity of the language itself.
I'll bite. Tell us, concretely, what is to be gained from a biological approach.
Honestly I imagine we'd find more out from philosophers helping to spec out what a sentient mind actually is than we would from having biologists trying to explain imperfect implementations of the mechanisms of thought.
I'm short on time, so please forgive my rushed answer.
It will deliver on all of the failed promises of past AI techniques: creative machines that actually understand language and the world around them. The "hard" AI problems of vision and commonsense reasoning will become "easy". You won't need to program into a computer the logic that all people have hands, or that eyes and noses are on faces. Machines will gain this experience as they learn about our world, just like their biological equivalents: children.
Here's some more food for thought from Jeff Hawkins:
"John Searle, an influential philosophy professor at the University of California at Berkeley, was at that time saying that computers were not, and could not be, intelligent. To prove it, in 1980 he came up with a thought experiment called the Chinese Room. It goes like this:

Suppose you have a room with a slot in one wall, and inside is an English-speaking person sitting at a desk. He has a big book of instructions and all the pencils and scratch paper he could ever need. Flipping through the book, he sees that the instructions, written in English, dictate ways to manipulate, sort, and compare Chinese characters. Mind you, the directions say nothing about the meanings of the Chinese characters; they only deal with how the characters are to be copied, erased, reordered, transcribed, and so forth.

Someone outside the room slips a piece of paper through the slot. On it is written a story and questions about the story, all in Chinese. The man inside doesn't speak or read a word of Chinese, but he picks up the paper and goes to work with the rulebook. He toils and toils, rotely following the instructions in the book. At times the instructions tell him to write characters on scrap paper, and at other times to move and erase characters. Applying rule after rule, writing and erasing characters, the man works until the book's instructions tell him he is done. When he is finished at last he has written a new page of characters, which unbeknownst to him are the answers to the questions. The book tells him to pass his paper back through the slot. He does it, and wonders what this whole tedious exercise has been about.

Outside, a Chinese speaker reads the page. The answers are all correct, she notes—even insightful. If she is asked whether those answers came from an intelligent mind that had understood the story, she will definitely say yes. But can she be right? Who understood the story? It wasn't the fellow inside, certainly; he is ignorant of Chinese and has no idea what the story was about. It wasn't the book, which is just, well, a book, sitting inertly on the writing desk amid piles of paper.

So where did the understanding occur? Searle's answer is that no understanding did occur; it was just a bunch of mindless page flipping and pencil scratching. And now the bait-and-switch: the Chinese Room is exactly analogous to a digital computer. The person is the CPU, mindlessly executing instructions, the book is the software program feeding instructions to the CPU, and the scratch paper is the memory. Thus, no matter how cleverly a computer is designed to simulate intelligence by producing the same behavior as a human, it has no understanding and it is not intelligent. (Searle made it clear he didn't know what intelligence is; he was only saying that whatever it is, computers don't have it.)

This argument created a huge row among philosophers and AI pundits. It spawned hundreds of articles, plus more than a little vitriol and bad blood. AI defenders came up with dozens of counterarguments to Searle, such as claiming that although none of the room's component parts understood Chinese, the entire room as a whole did, or that the person in the room really did understand Chinese, but just didn't know it. As for me, I think Searle had it right. When I thought through the Chinese Room argument and when I thought about how computers worked, I didn't see understanding happening anywhere. I was convinced we needed to understand what "understanding" is, a way to define it that would make it clear when a system was intelligent and when it wasn't, when it understands Chinese and when it doesn't. Its behavior doesn't tell us this.

A human doesn't need to "do" anything to understand a story. I can read a story quietly, and although I have no overt behavior my understanding and comprehension are clear, at least to me. You, on the other hand, cannot tell from my quiet behavior whether I understand the story or not, or even if I know the language the story is written in. You might later ask me questions to see if I did, but my understanding occurred when I read the story, not just when I answer your questions. A thesis of this book is that understanding cannot be measured by external behavior; as we'll see in the coming chapters, it is instead an internal metric of how the brain remembers things and uses its memories to make predictions. The Chinese Room, Deep Blue, and most computer programs don't have anything akin to this. They don't understand what they are doing. The only way we can judge whether a computer is intelligent is by its output, or behavior."
First, I don't feel this answers angersock's question concerning concrete applications of cognitive neuroscience to artificial intelligence.
Second, despite running into it time and again over the years, Searle's Chinese room argument still does not much impress me. It seems to me clear that the setup just hides the difficulty and complexity of understanding in the magical lookup table of the book. Since you've probably encountered this sort of response, as well as the analogy from the Chinese room back to the human brain itself, I'm curious what you find useful and compelling in Searle's argument.
I remain interested in biological approaches to cognition and the potential for insights from brain modelling, but I don't see how it's useful to disparage mathematical and statistical approaches, especially without concrete feats to back up the criticism.
Traditional AI has had half a century of failed promises. Jeff's Numenta had a major shakeup over this very topic and has only been working with biologically inspired AI for the past 3 years. Kurzweil also has only recently come around. Comparing Grok to Watson is like putting a yellow belt up against Bruce Lee. Give it some time to catch up.
In university I witnessed first-hand the institutional discrimination against biological neural nets. My original point was that Google could use the fresh blood and ideas.
You took the wrong lesson from the Chinese Room. Behavioral equivalence is enough, and the Chinese Room shows that behavioral equivalence isn't achievable through hypothetical trivial implementations like "a room full of books with all the Chinese-English translations".
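One way to see why the "room full of books" is only hypothetical is to count the entries such a lookup table would need: the number of possible input strings grows exponentially with length. A back-of-the-envelope calculation (the ~3,000-character working vocabulary and the 10^80 atom count are rough, commonly cited assumptions):

```python
# How long does a Chinese string have to be before the number of
# possible strings of that length exceeds the number of atoms in the
# observable universe? Vocabulary size is an assumed rough figure.
vocab = 3000                            # assumed working character set
atoms_in_observable_universe = 10**80   # commonly cited rough estimate

n = 1
while vocab**n <= atoms_in_observable_universe:
    n += 1
print(n)  # at this length, strings already outnumber the atoms
```

The crossover comes at strings only a couple of dozen characters long, far shorter than the stories and questions in Searle's setup, so any system that actually passes the test must compute its answers rather than look them up.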
" the use of statistical models that have no biological basis."
this is irrelevant
This is like saying that a computer using an x86 processor is different, from the point of view of the user, from an ARM computer, beyond differences in software.
Or like saying DNA is needed for "data storage" in biological systems and not another technology
Sure, you can get inspiration from biology, but that doesn't necessarily mean you have to copy it.
""I need to meet up" to Chinese. Google translates it to 我需要满足, meaning "I need to satisfy". This is where statistical translations fail, "
It's not really a fault of statistical translation (more likely a data-quality issue), even though the approach has its limitations. Besides, Google's translation has been successful exactly because it's better than other existing methods (and Google has the resources, both in people and in data, to make it better).
I think that the google translator did pretty good on that fragment.
Garbage in, garbage out! If you use 'I' in a sentence fragment when you mean to use 'We' then you can't really blame the translator for getting it wrong.
'We need to meet up' is a sentence with a completely different meaning from the incorrect and semantically confusing 'I need to meet up', it really does sound as if you need to meet up to some expectation.
In further defense of Google, "I need to meet up with him" translates as 我需要与他见面.
If someone wants to attack Google's Chinese translation, it should be over snippets like 8十多万 or its failure to recognize many personal and place names which could easily be handled by a pre-processor. Google has never been competent in China in part because of their hiring decisions, but this isn't Franz Och's fault.
Obviously Google Translate is not error-free, nor is any statistical translation system going to be comparable to a human translator in the very near future, but you're underestimating the current state of statistical translation. Granted, I'm not a native speaker, but I don't think "I need to meet up" is even a sentence with proper grammar. The underlying model probably predicted something like meeting (satisfying) requirements due to the lack of an object and of context.

Situations like this, where the input is very short and noisy, are obviously going to be a weakness of statistical systems for a long time to come. But looking at how far we are, technologically, from mastering biological systems, I think it's safe to say this is going to be the way of doing it for a while, and it will be very successful in translating properly structured texts if proper context can be provided. Currently statistical translations have (almost) no awareness of context beyond some phrase-based or hierarchical models. Many people are probably not factoring in that, with exponentially more data and exponentially more computing power, a model could utilize the context of a whole book while translating a single sentence from it - which is still much less context than human translators use. While translating a sentence, I might even have to draw on what was on the news the night before to infer the correct context. We are definitely far from feeding this kind of information to our models, so I'd say this kind of criticism of statistical translation is very unfair.
"We need to meet up" also translates incorrectly, as 我们需要满足. In fact, I did not originally use a fragment; I wrote a full sentence that Google repeatedly translated incorrectly. I only used a fragment here to simplify my example.
To avoid the wrath of the Google fanboys, a better example would have been the pinnacle of statistical AI:
The category was "U.S. Cities" and the clue was: "Its largest airport is named for a World War II hero; its second largest for a World War II battle." The human competitors Ken Jennings and Brad Rutter both answered correctly with "Chicago" but IBM's supercomputer Watson said "Toronto."
Once again Watson, a probability-based system, failed where real intelligence would not.
Google has done an amazing job with their machine translation, considering they cling to these outdated statistical methods. And just as with speech recognition over the last 20 years, they will continue to get diminishing returns until they start borrowing from nature's own engine of intelligence.
You are exhibiting a deep misunderstanding of human intelligence.
Ken Jennings thought that a woman of loose morals could be called a "hoe" (with an "e", which makes no sense!), when the correct answer was "rake". Is Ken Jennings therefore inhuman?
[1] http://www.tor.com/blogs/2011/06/norvig-vs-chomsky-and-the-f...