Unsurprising. It's a hype bubble; it seems like only VCs and executives haven't realized that yet.
It has however become a very good metric for spotting low quality technical leadership. If an executive or similar is talking about "AI" or "Machine Learning" without the ability to identify specific use cases that they're hoping to implement, then that is a huge red flag. AI, as much as it actually exists, is a tool to be applied to an end, not magic pixie dust to be sprinkled onto your product to make it more profitable.
I agree with this at face value, but I want to push back a bit so that I can sharpen my own argument. How can I argue that "AI" is a hype bubble to a layman while simultaneously acknowledging that we've seen the development of self-driving cars, music identification, Siri and Google Assistant, search by image, etc.?
Obviously this isn't all "AI" in the historical CS sense of the word. It's a mix of progress in things like signal processing, computer vision, NLP, neural nets and transformers, and the raw computer engineering that has made all of that practical on modern hardware.
Are you and I sticks-in-the-mud for not just going along with calling this "AI"? Is there actual worthwhile nuance in calling things by their proper names rather than just kowtowing to the word that normal people have latched onto?
1) It’s entirely probable that self-driving cars are overhyped, and that it’ll be decades before we hit Level 4 or Level 5 autonomy. Lane keeping is nice, but it’s hardly revolutionary.
2) Audio and image identification is indeed an area where great strides have been made, and if those are relevant to your business, then you’re in luck and you should absolutely leverage them. But this goes back to my “it should be aimed at a specific use case” statement; image identification is much, much narrower than the way AI is being hyped.
3) Personally I’m less than impressed with the progress voice assistants have made; they demo well, but they have utterly failed to make a dent in my day-to-day life. End users seem to regularly oscillate between “this is amazing”, “this is stupid”, and “this is incredibly creepy”. In my opinion this area saw one great big leap forward about 5-6 years ago, and progress has basically stalled since.
If you read back about older AI bubbles, what you’ll find is that there are bits and pieces left over from each of them that are unquestionably improvements on what came before. But critically, these improvements fell well short of the grandiose promises made by AI boosters at the time. It’s entirely probable that this AI boom will follow a similar pattern: a massive cycle of hype and collapse that leaves behind useful techniques and tools which nonetheless fall well short of what was promised during the cycle.
As for what is and is not considered “AI”, that definition has shifted readily over the years. Groundbreaking techniques are regularly called “AI” right up until they stop being called that and get new names. We’re already seeing this process at work now with ML.
I think there are the AI leaders, and then there is everyone else. What is the difference?
The leaders are mostly big tech, who have driven the step-change advances you describe. This, IMHO, was due to their pre-existing mastery of data. They already had a ton of well-organized data, because they were engineering cultures, and data was the lifeblood of their business (ads, search, shopping). Once ML/AI came into the picture, it was full steam ahead.
Most others are (blind) followers, and cannot tease apart the engineering bit from the (data) science hype. They get fixated on the latter (data science and ML/AI) and forget the engineering, or "scale" part.
The first question should not be about AI/ML but rather: do you have solid (data) engineering, where your data is easily accessible to any data scientist? By now it should be apparent that "data is the new oil" and will be useful even if you don't plan to do deep learning.
If you don't have solid (data) engineering and "data at scale" for anyone, anywhere, then your ML/AI efforts are doomed.
It seems to me the hype bubble is centered around things that require GAI to solve. Progress in more general methods for weak AIs has led to confusion about what is possible with current methods.
In the traditional CS sense, the things you mention could all be considered weak AI.
I predict as we continue to transition to stronger AIs with more advanced capabilities we will continue to see a shift in what people think of as “AI”.
Sorry, but it's just wrong to call it a hype bubble, because there are real and tangible uses of ML in production today on massively scaled systems that give extremely outsized results. All of GAFAM rely on ML at scale at this point for a large subset of their products.
It's just that these results are also EXTREMELY unevenly distributed - and good luck breaking into anything that can use applied ML without getting crushed by GAFAM.
However I agree with your point here:
>If an executive or similar is talking about "AI" or "Machine Learning" without the ability to identify specific use cases that they're hoping to implement, then that is a huge red flag.
So it's not as simple as "AI is hype" - it's not hype - it's just that most organizations will struggle to actually implement it, because all the data/talent/compute etc... sits in GAFAM.
It’s a hype bubble due to the delta between what’s being promised (a revolution of basically every aspect of business) and what’s actually delivered (massive improvements in very specific domains where a large, easily classified data set is available).
There are unquestionably areas where ML is delivering in spades, but it’s nowhere near as ubiquitous as the hype implies.
Even for the behemoth companies that are able to harness AI, it seems like the domains are a) heavily constrained and b) fault tolerant. For example, voice assistants - they have very limited capabilities, and consumers will accept pretty poor performance from them. Look at the errors in Google's attempts to automatically answer questions in searches.
Do you have any examples of domains where a FAANG has operationalized AI/ML outside of consumer products?
Any insight into the actual methodology? I couldn't find specifics, but I would be curious what their baseline condition is.
I wonder if the baseline case is "no control optimization" or if it was based on current control best practices. For example, one article claims it produces cooler water temperatures than normal based on outside conditions. That is already a best practice in good energy management: wet-bulb outdoor-air temperature reset strategies do it without using ML. If their 40% savings was above and beyond those best practices, that's a pretty big accomplishment. If it's relative to a static temperature setpoint scenario (i.e. not best practice), it's less so.
Edit: after skimming [1], it seems like their baseline condition was the naive/non-best-practice approach. I'm not discounting the potential for ML, but I think a more accurate comparison should use traditional "best practice" control strategies, not a naive baseline condition. In some cases, it seems like the ML approach identified would be less advantageous than current non-ML best practices (e.g., raising the cooling tower water temperature by a static 3 degrees rather than tracking it against a wet-bulb temperature offset).
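For anyone unfamiliar, the kind of non-ML "best practice" reset I'm describing is just a simple setpoint formula. A minimal sketch in Python, with purely illustrative numbers and limits (not values from [1]):

    def condenser_water_setpoint_f(outdoor_wet_bulb_f, approach_f=5.0,
                                   min_setpoint_f=65.0, max_setpoint_f=85.0):
        """Wet-bulb reset: track the outdoor wet-bulb temperature plus a fixed
        approach, clamped to the cooling tower's allowable range."""
        setpoint = outdoor_wet_bulb_f + approach_f
        return max(min_setpoint_f, min(max_setpoint_f, setpoint))

    # On a mild day (60F wet bulb) this resets the condenser water supply
    # setpoint down to 65F instead of holding a static 75F; that tracking is
    # where much of the "free" savings comes from before any ML is involved.
    print(condenser_water_setpoint_f(60.0))  # 65.0

The fair question is how much the ML controller saves on top of something like this, not on top of a fixed setpoint.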
"In fact, the model’s first recommendation for achieving maximum energy conservation was to shut down the entire facility, which, strictly speaking, wasn’t inaccurate but wasn’t particularly helpful either."
> if you showed a google home to someone in 1980s they would be absolutely floored.
I am not so sure about that. If you stepped out of a time machine and said "This is an AI from the year 2020", they would try to converse with it and quickly realize it can't hold a conversation. People from the '80s would probably assume that by the year 2020 we'd have sentient robots, and be disappointed when all it can do is turn on the lights when asked in a specific way.
Well, FAANG all produce consumer products, so that restriction wipes out a bazillion legitimate applications, but you've still got the fact that Facebook and Google sell ads, which use AI for targeting. Data centre cooling was already mentioned, but did you know lithography now uses ML? There's even work on using ML for place and route.
The VC business model seems to mostly be "finding the greater fool during the IPO"; they probably know very well that most of those things are buzzwords.
This is sort of like any new tech and its domain applications. The application tree first grows in breadth, then in depth. I'm guessing real use cases will take 5-20 years to settle.
It’s like all new tech, if you fall into the survivorship bias trap and only look at the ones that have survived.
Personally I think this will be another “AI winter”, where we will gain some improvements in narrow domains but fall far short of what was promised, leading to disillusionment and a reduction in research budgets.
Like all hype bubbles except blockchain, AI will be big in 10 years; it’s only "hype" now because it’s too early. Web 2.0, e-commerce, mobile, etc. all grew into their hype.
Funny thing: I took some AI courses as part of my CS curriculum at uni. They said back then: "20 years ago, they made a lot of AI predictions, and they obviously did not happen, but 20 years from _now_: oh boy." That was 20 years ago, and, well, history seems to repeat itself.
We'll see. Maybe, with Moore's-law-like gains pushing processing capacity and a separate 10x improvement in the mechanics (i.e. 'how' AI learns).
Machine translation, search, recommendation, face/object detection, image processing are just a subset of very real technologies broadly deployed today that benefit from machine learning (a subset of AI).
AI does suffer from the problem where the term seems to be defined as “what we can't do now”. That said, I think those are also good examples of why it's important to be realistic – good translation has been significant, especially for travelers, but search, recommendations, face detection, etc. have been modest incremental improvements. That's great to have but a lot of people aren't content with that billing and create problems by overselling it as game-changing.
Exactly. - "In my experience and opinion, as soon as something is well defined it becomes Artificial Narrow Intelligence - Vision systems, Natural Language Processing, Machine Learning, Neural Networks - it loses the name "AI" and gets its own specific name."
Face detection is really valuable to me. I have cameras on the front of my house and I get an alert on my phone when someone unrecognized comes up to the house. It sometimes false-triggers on me and my wife, but it lets us know 1) when a package is delivered, 2) when our kids wander out to the front yard, and 3) (hasn't happened yet, but) if a sketchy person comes onto our property.
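For the curious, the known-vs-unknown alerting logic is simple enough to sketch with an off-the-shelf library like face_recognition. This is just an illustration of the idea (the file names and the final alert are placeholders), not what my camera system actually runs:

    import face_recognition

    # Encode the household's faces once, from a few reference photos.
    known_encodings = [
        face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
        for path in ["me.jpg", "wife.jpg"]
    ]

    def all_faces_known(snapshot_path):
        """Return True if every face in the snapshot matches a household member."""
        frame = face_recognition.load_image_file(snapshot_path)
        encodings = face_recognition.face_encodings(frame)
        if not encodings:
            return True  # no face in the frame, nothing to alert on
        return all(
            any(face_recognition.compare_faces(known_encodings, enc))
            for enc in encodings
        )

    if not all_faces_known("front_door_snapshot.jpg"):
        print("Unrecognized person at the front door")  # stand-in for a phone alert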
Yes, and it's gotten better than the older CV approaches. My point is that it's nice but it didn't significantly transform your life the same way, say, a smartphone did.
It's funny how people attributed a ton of value to blockchains and cryptocurrency, when most of crypto's value as an alternative to government fiat only came about because of its ability to mask illicit payments for drugs, weapons, and contract killings. People went all-in on Bitcoin in 2017, even though the FBI had shut down Silk Road, for a long time the main Bitcoin mover, back in 2014.
Two years ago, all I heard in the health care industry was how awesome blockchain would be for our industry. Lots of presentations by bigwigs in the company, big-time investments and initiatives pitching our company as the "tech leader" in the space, blah blah blah.
Two years on and you can't find any mention of it on any of the internal websites; all the PPTs and videos have been scrubbed from the NAS drives that people were once encouraged to go watch and learn from. All the supposed "partnerships" have been dissolved, or never took off in the first place.
Two years later and blockchain is nothing but a memory at the company I work at. Just as fast as it arrived, it disappeared without so much as a trace of it ever existing in the first place.
VR isn't a hype bubble at all though. It's just that graphics tech has only recently gotten powerful enough to pull it off, but it's still expensive. It'll come down in price and get better in quality, while probably staying relatively small due to the setups needed. It's just like any high-end gaming setups. Most people are fine with simple gaming on their phone (Angry Birds or whatever is popular these days), some want consoles, some want powerful PCs, some are satisfied with Oculus Quest type VR devices, and some want quite powerful PCs for VR. VR is growing way too slowly and steadily to really characterize as "hype".
VR is already widely used in industrial training. Send a forklift driver through VR training and the safety lessons might stick a little bit better, so your insurance company will cut you a discount and it can pay for itself.
Indeed. That being said, I think it has even more potential in making more adaptable simulators of things that we couldn't simulate well or dynamically enough before.
Games are huge and, as a result, VR as a vehicle for delivering games will be huge. IMHO the world-changer is going to be AR, and the two will likely share many complementary objectives in hardware/software/skill sets.