> 1. “AI” (i.e. large ML model) -driven features are in demand
No, they're not. People with influence, or who have invested in the space, say that these features are in demand/the next big thing. In reality, I haven't seen a single user interview where the person actively wanted, or was even excited about, AI.
Photoshop now has a bunch of AI-driven features that get used in professional environments. And in the end-user space, facial recognition and magic eraser are features in apps like Google Photos that people actively use and like. People probably don't care that it's AI under the hood; in fact, they probably don't even realize it is.
There is a lot of unchecked hype, but that doesn't mean there is no substance.
When people say AI, they mean LLMs. Your examples are ML models in general, which have been around for a lot longer, since before OpenAI and the techbros had their AGI wet dream.
I didn't make any assertion about AI, only about "AI" (note the quotes in my GP comment) — i.e. the same old machine-learning-based features like super-resolution upscaling, patch-match, etc, that people have been adding to image-editing software for more than a decade now, but which now get branded as "AI" because people recognize them by this highly-saturated marketing term.
Few artists want generative-AI diffusion models in their paint program; but most artists appreciate "classical" ML-based tools and effects — many of which they might not even think of as being ML-based. Because, until recently, "classical ML" tools and effects have been things run client-side on the system, and so necessarily small and lightweight, only being shipped if they'll work on the lowest-common-denominator GPU (esp. "amount of VRAM") that artists might be using.
The interesting thing is that, due to the genAI craze, GPU training and inference clusters have been highly commoditized / brought into reach for the developers of these "classical ML" models. You no longer need to invest in your own hyperscale on-prem GPU cluster to train models bigger than what fits on a gaming PC. And this has led to increased interest in, and development of, larger "classical ML" models, because they're no longer so tightly bounded by having to run client-side on lowest-common-denominator hardware. Developers can instead throw (time on) a cloud GPU cluster at training their model, and then expect the downstream consumer of that model (= a company like Canva) to solve the problem of running it not by pushing back for something size-optimized to run locally on user machines, but by standing up a model-inference-API backend on the same kind of GPU IaaS infra that was used to train it.
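To make that serving pattern concrete, here's a minimal sketch of what the downstream-consumer side might look like, assuming a hypothetical hosted super-resolution endpoint (the URL, parameters, and helper function are all made up for illustration): the app just POSTs an image to the vendor's inference API instead of bundling a size-optimized model to run on the user's GPU.

```python
# Hypothetical sketch only: the model vendor trains a large "classical ML"
# model (e.g. super-resolution) on rented cloud GPUs, and the downstream app
# calls a hosted inference API rather than running the model client-side.
# The endpoint and parameters below are invented for illustration.
import requests

INFERENCE_URL = "https://inference.example.com/v1/super-resolution"  # hypothetical endpoint

def upscale(image_bytes: bytes, scale: int = 4) -> bytes:
    """Send an image to the hosted model and return the upscaled result."""
    resp = requests.post(
        INFERENCE_URL,
        files={"image": image_bytes},   # raw image payload
        data={"scale": scale},          # requested upscaling factor
        timeout=60,
    )
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    with open("photo.png", "rb") as f:
        upscaled = upscale(f.read())
    with open("photo_4x.png", "wb") as f:
        f.write(upscaled)
```

The design trade-off described above falls out of this: the heavy lifting stays on the vendor's GPU infra, so the model's size is no longer constrained by the artist's VRAM, at the cost of a network round-trip and a hosted backend.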
Is this the same algorithm that allows AI/LLM-enabled apps to remove things from photos? Example: removing a person who accidentally appeared on the left side of the photo.
My friend, my writing isn't AI slop. It's Adderall slop. AI slop is far more structured.
(Also, because I assume this is your issue: em-dash is option-shift-hyphen on US-English Mac keyboard. I've been using them in my writing for 25 years now, and I won't stop just because LLMs got ahold of them.)
Image generation models have been super useful for years for anyone needing to deliver any sort of production content. Of course nobody _promotes_ that. Using AI images is like taking photos as reference for collages. Anyone with a subscription to an image bank is likely happy enough to minibanana some generic references.