We are paying for the incredible bamboozle that is the phrase "Machine Learning." If we had used "computerized statistical inference" instead, and the phrase "machine learning" did not exist, the attitudes of everyone from investors to regulators, from customers and vendors to doomsayers and boosters alike, would be vastly better on the whole.
Nearly everyone here knows, on seeing or hearing "AI", that it's mostly a crock. Nearly everyone here knows ML is applied statistics done with a computer, but this is not common knowledge, and it really should be.
"AI" as a term deserves this rep, because it was effectively marketing as far back as the 70s. But you're way off on "Machine Learning".
There's been plenty of progress in the last 15 years re-interpreting many ML methods as regression (any optimization can be cast as a regression if you set up the right likelihood function).
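A tiny sketch of that point, under the usual assumptions (Gaussian noise, unit variance, data and numbers invented for illustration): minimizing squared error and maximizing the Gaussian log-likelihood select exactly the same parameter, because one is a monotone transform of the other.

```python
import numpy as np

# Illustrative only: fit a slope two ways and show the "loss view"
# and the "likelihood view" agree on the same parameter.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(0, 0.1, size=x.size)   # true slope is 2.0

slopes = np.linspace(0, 4, 4001)                # candidate parameters
sse = [np.sum((y - b * x) ** 2) for b in slopes]            # squared-error loss
loglik = [-0.5 * np.sum((y - b * x) ** 2) for b in slopes]  # Gaussian log-likelihood
                                                            # (up to constants, sigma = 1)

best_by_loss = slopes[np.argmin(sse)]
best_by_likelihood = slopes[np.argmax(loglik)]
print(best_by_loss, best_by_likelihood)  # same value both ways
```

The log-likelihood here is just -0.5 times the loss, so the argmin of one is the argmax of the other by construction; that identity is the "right likelihood function" move.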
But many important results and techniques -- including today's ubiquitous deep nets -- originated, and had successful applications, well before they had statistical interpretations. They came from fields like compression theory, database design, and even biological analogy.
The term Machine Learning was introduced to re-focus the field on a measurable objective: algorithms that improve with more data. The "Learning" part was not an abstract term meant to tug on your imagination; it came with formal definitions of how algorithms improve, involving slightly fewer assumptions than statistical learning (which is a subfield).
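To make that measurable objective concrete, here is a toy sketch (classifier, data, and numbers all invented for illustration): a nearest-centroid rule's average test error shrinks as the training set grows, which is "improves with more data" in miniature.

```python
import statistics, random

# Illustrative only: two overlapping 1-D classes, N(0, 1) vs N(1.5, 1).
# The classifier thresholds halfway between the estimated class means.
random.seed(1)

def trial(n_train):
    pos = [random.gauss(1.5, 1) for _ in range(n_train)]
    neg = [random.gauss(0.0, 1) for _ in range(n_train)]
    threshold = (statistics.mean(pos) + statistics.mean(neg)) / 2
    test = [(random.gauss(0.0, 1), 0) for _ in range(100)] + \
           [(random.gauss(1.5, 1), 1) for _ in range(100)]
    errors = sum((x > threshold) != bool(label) for x, label in test)
    return errors / len(test)

# Average error over many random training sets, tiny vs. large.
small = statistics.mean(trial(3) for _ in range(300))
large = statistics.mean(trial(300) for _ in range(300))
print(small, large)  # error rate drops as training data grows
```

The interesting part is that the comparison is purely behavioral: you never need a statistical interpretation of the rule to measure whether it learns.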
This lineage isn't that important today, but that focus on how learning is measured is still the most important guidepost, both for ML research and for sorting marketing BS from realistic claims. Certainly, state-of-the-art work using deep nets for tasks like NLP and image and video recognition isn't designed by reasoning about the statistical interpretation, or tested by applying typical statistical tests. Popularizing this work as Statistical Inference or Regression wouldn't give any added intuition, and wouldn't really describe the way ML research proceeds, or how ML systems succeed or fail.
It works by fitting curves. Whether you have a (presumably mathematical) "statistical interpretation" or not is basically irrelevant to what it actually does, and to what we should be conveying to people who aren't knowledgeable about the field. This is not an academic argument.
Putting "stats" right there in the name is vastly more informative than "Learning", which for 99% of people connotes something requiring intelligence, and is therefore misleading. Hence the AI cons all pop up the moment some public ML wins get called "Learning."
Generalizing from data is what statistics does. It's what ML is. People like Hinton, Wasserman, Tibshirani et al. seem to agree that ML is statistics, but even that isn't what I'm talking about here.
The term "machine learning" fits the field, but the Venn diagram of "what those two words could mean in English" versus "what the term means in the field" is a huge circle enclosing a tiny subset.
It's way too broad, and a term that naturally lent itself to a narrower interpretation on first encounter wouldn't have this problem.
---
It's fascinating to me, as someone who works with (rudimentary, non-ML) game AI, that until recently nobody really even tried building game AIs that "trained their heuristics." I get that AIs couldn't form a general plan or anything like that, but I was shocked, as an adult, to learn that e.g. FPS AIs were too dumb even to take "guesstimate" values, like how much they needed to lead a shot (i.e. honing ballistics calculations), and train that aiming value from inputs and a success/failure criterion. As a kid, the obviousness of the idea, and the triviality of the effort it ought to take (surely a couple of hours, tops?), had me convinced that of course everybody was doing that.
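For what it's worth, the narrow version of the idea really is small. Here's a hypothetical toy sketch (the physics and noise model are made up for illustration) of tuning a single lead value from per-shot ahead/behind feedback, via plain stochastic approximation:

```python
import random

# Illustrative only: an AI learns how far to lead a moving target
# from the observed miss direction of each shot.
random.seed(42)

TRUE_LEAD = 0.75     # the unknown ideal lead, in seconds
lead = 0.0           # the AI's current guess
step = 0.05

for shot in range(500):
    # Observed miss: positive means the shot landed too far ahead.
    miss = (lead - TRUE_LEAD) + random.gauss(0, 0.1)
    # Sign-based correction: nudge the guess against the miss direction.
    if miss > 0:
        lead -= step
    else:
        lead += step
    step = max(step * 0.995, 0.005)   # cool the step size down

print(round(lead, 2))  # settles near the true lead
```

The catch, per the next comment, is that real games bury this under moving targets, varied weapons, and tuning budgets; the loop is trivial, shipping it isn't.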
Once I became an adult, I learned the bitter truth that even banally simple ideas are shockingly difficult to put into practice. The devil's in the details.
I see these dismissals of "it's just statistics" often, and I don't get where they come from. If anything, maybe it's "just" stochastic gradient descent, but there is a distinct "learning" part of ML that does not obviously follow from statistics. You could argue it's just addition, subtraction, multiplication, division and root extraction too, but that is a pointless reduction that doesn't help understand what's going on.
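For the record, the "just SGD" in question is a loop about this small; everything below (data, model, learning rate) is an illustrative assumption, not anything from the thread:

```python
import random

# Illustrative only: fit y = w*x by nudging w after each sample,
# which is stochastic gradient descent on the squared error.
random.seed(0)
data = [(i / 100, 3.0 * (i / 100) + random.gauss(0, 0.05))
        for i in range(100)]          # true slope is 3.0, plus noise

w, lr = 0.0, 0.1
for epoch in range(20):
    random.shuffle(data)              # the "stochastic" part
    for x, y in data:
        grad = 2 * (w * x - y) * x    # d/dw of (w*x - y)**2 on one sample
        w -= lr * grad

print(round(w, 1))  # close to the true slope 3.0
```

Calling this "just arithmetic" is technically true and explains nothing, which is the reduction being objected to above.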
Statistics isn't "just statistics." There is no "dismissal" of it. It's a hugely powerful tool. It can be used incredibly badly, and result in evil.
People have an idea of what a statistical analysis is and what basing decisions on it means -- e.g. gambling. That is what ML /is/. It's not some incredible computer-brain thinking-learning magic pixie dust. You know that. I know that. Everybody who knows what ML is knows that. That's a minute proportion of the world. This is the data we need to learn from.
See ML as distinct from stats all you like; go nuts. Take it up with Hinton, Wasserman, Murphy, Tibshirani & Hastie and so on. Your understanding is different from theirs, which could well make your textbook a groundbreaking best seller.