Andrew Ng once tweeted about a 1-second heuristic:

> Pretty much anything that a normal person can do in <1 sec, we can now automate with AI.

https://twitter.com/andrewyng/status/788548053745569792?lang...

And was summarily derided for it. Read the replies to his tweet to see some of the suggested counterexamples.
I sympathize with Ng here because he's making an effort to describe the boundaries and capabilities of ML to non-technical people in a simple way, and the 1-second heuristic is about as good at that as any I've seen. Yet it's incredibly difficult to delineate what ML is good and bad at, because it's so different from human or animal intelligence!
On the one hand, modern ML is reasonably good at what Daniel Kahneman calls "System 1" thinking: fast, intuitive, stimulus-response tasks. However, for many of those tasks the AI is MUCH faster, sometimes millions of times faster.
On the other hand, there are many tasks that modern ML can do that humans cannot, such as learning to recognize digits even after the pixels have been scrambled by a fixed, deterministic permutation. To the human eye the result always looks like static, but (non-CNN) neural nets don't care and still achieve 99% accuracy on MNIST.

A very broad category where ML is just plain better than humans is nuanced probability estimates. Human intuition about probability appears to be a pile of fallacies and broken rules of thumb, while ML algorithms trained with appropriate loss functions can achieve very high "calibration", in the sense that the probability estimates they produce closely match the empirical conditional probabilities in real-world data. (See the sketch below for both points.)
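Both claims are easy to check empirically. Here is a minimal sketch, not from the thread: it assumes scikit-learn (fetch_openml downloads MNIST over the network), and the architecture and hyperparameters are purely illustrative, not tuned.

    import numpy as np
    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Load MNIST and scale pixels to [0, 1].
    X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
    X = X / 255.0

    # Apply ONE fixed permutation to every image. To a human every image now
    # looks like static, but a fully connected net has no notion of pixel
    # adjacency, so its accuracy is essentially unchanged.
    rng = np.random.default_rng(0)
    X = X[:, rng.permutation(X.shape[1])]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=10000, random_state=0)

    # Illustrative (untuned) fully connected net -- deliberately not a CNN.
    clf = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=30, random_state=0)
    clf.fit(X_tr, y_tr)
    print("test accuracy on permuted MNIST:", clf.score(X_te, y_te))

    # Crude calibration check: bucket test predictions by confidence and
    # compare mean predicted probability to empirical accuracy per bucket.
    conf = clf.predict_proba(X_te).max(axis=1)
    correct = clf.predict(X_te) == y_te
    for lo in np.arange(0.5, 1.0, 0.1):
        in_bin = (conf >= lo) & (conf < lo + 0.1)
        if in_bin.any():
            print(f"conf [{lo:.1f},{lo+0.1:.1f}): "
                  f"predicted {conf[in_bin].mean():.3f}, "
                  f"actual {correct[in_bin].mean():.3f}")

If the predicted and empirical numbers track each other across buckets, the model is well calibrated in exactly the sense described above.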
> Andrew Ng once tweeted about a 1-second heuristic [..] And was summarily derided for it. Read the replies to his tweet to see some of the suggested counterexamples.
FYI, to add some extra context to that "1 sec heuristic"...
I've seen several Andrew Ng presentations over the years, and in the live talks he has more of a lead-up to that "1 second" idea[0]. He was trying to help project managers at Baidu think about the possibilities of new AI products to build.
Unfortunately, the extreme brevity of that tweet strips all of that surrounding context, so it makes him look like an AI crackpot instead of an AI realist. (E.g. the top tweet reply from Pedro Domingos seems to be based on the limitations of current AI, but Andrew Ng was trying to give PMs an idea of what new AI to build.)
Thank you, good clarification; sorry if I made it sound like I was dog-piling on Ng. Andrew Ng works harder at explaining ML in a non-technical way than anyone else I can think of, and the 1-second rule came out of that genuine desire to educate. I've been in the exact same boat: a senior executive who "used to code" trying to get me to explain to him in simple terms what kinds of problems ML could solve; I sort of fumbled through it by mentioning the 1-second rule and giving some concrete examples. It's just absurdly hard to put it in non-technical terms.