Thanks! HN was part of the origin story of the book in question.
In 2018 or 2019 I saw a comment here that said that most people don't appreciate the distinction between domains with low irreducible error that benefit from fancy models with complex decision boundaries (like computer vision) and domains with high irreducible error where such models don't add much value over something simple like logistic regression.
It's an obvious-in-retrospect observation, but it made me realize that this is the source of a lot of confusion and hype about AI (such as the idea that we can use it to predict crime accurately). I gave a talk elaborating on this point, which went viral and then led to the book with my coauthor Sayash Kapoor. More surprisingly, despite being seemingly obvious, the observation led to a productive research agenda.
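The distinction above can be made concrete with a small simulation. This is a hypothetical sketch (not from the book): the outcome is only weakly determined by the observed feature, so even the Bayes-optimal classifier has limited accuracy, and no amount of model complexity can push past that floor. The noise level and functional form here are arbitrary choices for illustration.

```python
import numpy as np

# Illustration of irreducible error in a high-noise domain
# (think social outcomes like recidivism, not image labels).
rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)

# True probability of a positive outcome: a shallow sigmoid of x,
# so the feature carries only weak signal about the label.
p = 1 / (1 + np.exp(-0.5 * x))
y = rng.random(n) < p

# Bayes-optimal rule: predict positive whenever p > 0.5, i.e. x > 0.
# Its accuracy is the best ANY model -- simple or fancy -- can achieve.
bayes_acc = np.mean((x > 0) == y)
print(f"Bayes-optimal accuracy: {bayes_acc:.3f}")
```

With these parameters the ceiling lands around 0.6: a deep network trained on this data can only chase noise beyond what a one-parameter logistic rule already captures, whereas in a low-noise domain like image classification the ceiling is near 1.0 and complex decision boundaries pay off.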
While writing the book I spent a lot of time searching for that comment so that I could credit/thank the author, but never found it.
“… machine learning everything that focuses on dealing with problems with a complex structure and low noise, and statistics everything that focuses on dealing with problems with a large amount of noise.”
It's hard to miss the similarity between your book's title and Cliff Stoll's 1995 Silicon Snake Oil, an indictment of the general concept of the "information superhighway" that was starting to resonate with the public. Stoll is a really smart guy, but that particular book hasn't held up too well:
"Few aspects of daily life require computers... They're irrelevant to cooking, driving, visiting, negotiating, eating, hiking, dancing, speaking, and gossiping. You don't need a computer to... recite a poem or say a prayer." Computers can't, Stoll claims, provide a richer or better life.
Our more recent essay (and ongoing book project) "AI as Normal Technology" is about our vision of AI impacts over a longer timescale than "AI Snake Oil" looks at: https://www.normaltech.ai/p/ai-as-normal-technology
I would categorize our views as techno-optimist, but people understand that term in many different ways, so you be the judge.