> AI are better than most humans at dealing with human suckage
That is a valid opinion, but subjective. If I say that they're not better, we're going to be exchanging anecdotes and getting nowhere.
Hence, the need for a less subjective way of evaluating AI's abilities.
> Making a self driving car fail like a human... "drunk" and "tired"
You don't understand.
It's not about making them present the same failure rate or personality defects as a human. Of course we want self-driving cars to make fewer errors and be better than us.
However, when they fail, we want them to fail the way a sane, competent human would, instead of hallucinating gibberish that catches other humans off guard.
Simplifying: it's better to have something that works 95% of the time and hallucinates in predictable ways the other 5% than something that works 99% of the time but hallucinates catastrophically in that remaining 1%.
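To make that concrete with purely illustrative numbers (the harm weights are my assumption, not measured figures): suppose a predictable failure costs 1 unit of harm, because other drivers can anticipate and compensate, while an unpredictable catastrophic failure costs 100. Then the 95%-reliable system's expected harm is 0.05 × 1 = 0.05 per trip, while the 99%-reliable one's is 0.01 × 100 = 1.0. The "more reliable" system comes out twenty times worse once you weight failures by how badly they surprise everyone else on the road.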
Stick to the more objective side of the discussion, not this anecdotal, subjective talk that leads nowhere.