> Pretty much anything that a normal person can do in <1 sec, we can now automate with AI.

https://twitter.com/andrewyng/status/788548053745569792?lang...

And was summarily derided for it. Read the replies to his tweet to see some of the suggested counterexamples.
I sympathize with Ng here because he's making an effort to describe the boundaries and capabilities of ML to non-technical people in a simple way, and the 1-second heuristic is about as good at that as any I've seen. Yet it's incredibly difficult to delineate what ML is good and bad at because it's so different from human or animal intelligence!
On the one hand, modern ML is reasonably good at what Daniel Kahneman calls "System 1" thinking - fast, intuitive, stimulus-response stuff. And for a lot of those tasks, the AI can do it MUCH faster, sometimes millions of times faster.
On the other, there are many tasks that modern ML can do that humans cannot, such as learning to recognize digits even after the pixels have been scrambled in a fixed, deterministic way. To the human eye the result always looks like static, but (non-CNN) neural nets don't care and still achieve ~99% accuracy on MNIST.

A very broad category of things where ML is just plain better than humans is nuanced probability estimates. While human intuition about probability appears to be a pile of fallacies and broken rules of thumb, ML algorithms with appropriate loss functions can achieve very high "calibration," in the sense that the probability estimates they produce closely match the empirical conditional probabilities in real-world data.
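For the curious, here's a minimal sketch of the scrambled-digits claim (assuming PyTorch and torchvision are installed; the seed, MLP width, and epoch count are arbitrary choices of mine, not anything Ng or the thread specifies). One fixed permutation is applied to every image, so every digit looks like static to a human, but a plain fully connected net never relied on pixel adjacency in the first place:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    torch.manual_seed(0)
    perm = torch.randperm(28 * 28)  # one fixed, deterministic pixel scramble

    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Lambda(lambda x: x.view(-1)[perm]),  # flatten, then scramble
    ])

    train = datasets.MNIST("data", train=True, download=True, transform=transform)
    loader = DataLoader(train, batch_size=128, shuffle=True)

    # A plain MLP has no notion of which pixels are neighbors, so the permutation
    # costs it essentially nothing; a CNN, by contrast, would be crippled by it.
    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

And a rough sketch of what "calibration" means in practice (assuming scikit-learn; the synthetic dataset and logistic regression are just stand-ins): bucket the model's predicted probabilities and check that each bucket's average prediction matches the observed fraction of positives.

    from sklearn.calibration import calibration_curve
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Log-loss is a "proper" scoring rule, so minimizing it pushes the predicted
    # probabilities toward the true conditional probabilities.
    probs = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

    frac_pos, mean_pred = calibration_curve(y_te, probs, n_bins=10)
    for p, f in zip(mean_pred, frac_pos):
        print(f"predicted {p:.2f} -> observed {f:.2f}")  # close pairs = well calibrated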
>Andrew Ng once tweeted about a 1-second heuristic [..] And was summarily derided for it. Read the replies to his tweet to see some of the suggested counterexamples.
FYI, to add some extra context to that "1 sec heuristic"...
I've seen several Andrew Ng presentations over the years, and in the live talks he has more of a lead-up to that "1 second" idea[0]. He was trying to help project managers at Baidu think about the possibilities of new AI products to build.
Unfortunately, the extreme brevity of that tweet strips away all that surrounding context, so it makes him look like an AI crackpot instead of an AI realist. (E.g., the top reply from Pedro Domingos seems to be based on the limitations of current AI, but Andrew Ng was trying to convey to PMs what new AI could be built.)
Thank you, good clarification, and sorry if I made it sound like I was dog-piling on Ng. Andrew Ng works harder at explaining ML in a non-technical way than anyone else I can think of, and the 1-second rule came out of that genuine desire to educate. I've been in the exact same boat: a senior executive who "used to code" tried to get me to explain in simple terms what kinds of problems ML could solve, and I sort of fumbled through it by mentioning the 1-second rule and giving some concrete examples. It's just absurdly hard to put it in non-technical terms.
>In contrast, take music recommendations. If I give you a series of suggested songs, can you detect a pattern?
It seems to me that there has been a lot of success with recommendation engines (e.g., Spotify, the "Netflix algorithm", etc.). Maybe not so much on a smaller scale, but large companies use recommendation engines all the time to keep users engaged.
Other than that, I agree with the rest of the post. The eagerness to throw ML at anything and everything can be kind of exhausting. When used right it can be super powerful, but sometimes a simpler method that achieves 80% of the results may be enough for certain use cases.
> It seems to me that there has been a lot of success in recommendation engines
Not as much as you might think:
https://apenwarr.ca/log/20190201
"Mainstream movies are specially designed to be inoffensive to just about everyone. My Netflix recommendations screen is no longer "Recommended for you," it's "New Releases," and then "Trending Now," and "Watch it again." "
"As promised, Netflix paid out their $1 million prize to buy the winning recommendation algorithm, which was even better than their old one. But they didn't use it, they threw it away."
I definitely got a lot of songs from it that were spot-on, but I also got a ton that were not-as-enjoyable covers of songs I already had. I don't get why they keep doing that. It turned me off.
Yeah, same here! The daily mixes that Spotify makes and groups based on my different listening habits have been pretty great. I get a ton of value out of them.