Any computer smart enough to pass the Turing test would be so bored talking to the average human that it'd give up and choose failure.
When we think of smart computers, we think the computer should be smart enough to answer our questions. But the majority of our questions can be answered with simple facts.
A better test of computer intelligence would be to build a computer that could /ask/ questions. In fact, the more questions a computer can ask in a conversation, the smarter it is.
Intelligence isn't so much about what we know, but about what we know we /don't/ know. At some point in your evolution as an individual, the amount you know will surpass most of those around you. You'll find that your questions go mostly unanswered. If they are smart, people will simply respond to your questions with, "I don't know, that's a good question." If they are not smart, they'll give you a wrong answer. You'll know it's wrong because you already thought of that...
An AI that can take a person, or /many/ people, to the edge of their personal knowledge is what we are after.
However, an AI like this will never pass the Turing test, because people will know it's too smart to be human. Instead, for the AI to pass the test, it'll have to, like the LOLBOT, dumb itself down to the point that a human won't get suspicious.
It's this ability for the AI to consciously "trick" humans that is important for passing the Turing test. Don't be too smart, but don't be too dumb. Just be smart enough to ask questions, but not too many questions.
I read an article a while back about a computer that simply repeated everything the human said back to them, but as a question. This computer convinced many participants that it was human.
At one point in the experiment, the article mentioned a particular participant who, when approached by one of the researchers, replied with something to the effect of, "Go away, this is a private conversation!"
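The echo-as-question trick is simple enough to sketch in a few lines. This is a hypothetical reconstruction, not the program from the article: the pronoun table and the function name `reflect` are my own, and a real system would need far more rules.

```python
# A minimal "repeat it back as a question" bot: swap first- and
# second-person words, then turn the statement into a question.
PRONOUN_SWAPS = {
    "i": "you",
    "me": "you",
    "my": "your",
    "am": "are",
    "you": "I",
    "your": "my",
}

def reflect(statement: str) -> str:
    """Echo a statement back to the speaker as a question."""
    words = statement.rstrip(".!").split()
    swapped = [PRONOUN_SWAPS.get(w.lower(), w) for w in words]
    return " ".join(swapped).capitalize() + "?"
```

Feed it "I am tired of my job." and it answers "You are tired of your job?" No facts, no knowledge, just a mirror, and yet that mirror was convincing enough that people defended their privacy against interruption.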