I don't think that's achievable with all the science fiction surrounding "AI" specifically. You wouldn't be "reclaiming" the term, you'd be conquering an established cultural principality of emotionally-resonant science fiction.
Which is, of course, the precise reason why stakeholders are so insistent on using "AI" and "LLM" interchangeably.
Personally, I think the only reasonable way to get us out of that psycho-linguistic space is to just say "LLMs" and "LLM agents" when that's what we mean (am I leaving out some constellation of SotA technology? no, right?)
I personally regard posterior/score-gradient/flow-matching-style models as the most interesting thing going on right now, ranging from rich media diffusers (the extended `SDXL` family tree, which is now MMDiT and other heavy transformer stuff rapidly absorbing all of 2024's `LLM` tune-ups) all the way through to protein discovery and other medical applications (tomography; it's a huge world).
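For readers unfamiliar with the flow-matching objective mentioned above: in the common rectified-flow formulation, the network is trained to predict the constant velocity along a straight-line path between a noise sample and a data sample. A minimal sketch of that training target (numpy, illustrative function name; not any particular library's API):

```python
import numpy as np

def flow_matching_pair(x0, x1, t):
    """Rectified-flow / flow-matching training pair.

    x0: noise sample, x1: data sample, t in [0, 1].
    Returns the interpolated point x_t on the straight-line path
    and the constant target velocity the model learns to predict.
    """
    x_t = (1.0 - t) * x0 + t * x1  # point on the noise-to-data path
    v_target = x1 - x0             # velocity is constant along a straight line
    return x_t, v_target
```

The model is then regressed against `v_target` at randomly sampled `t`; sampling integrates the learned velocity field from noise to data.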
LLMs are very useful, but they're into the asymptote of expert labeling and other data-bounded constraints (God knows why the GB200-style Blackwell build-out is looking like a trillion bucks when Hopper is idle all over the world and we don't have a second Internet on which to pretrain a bigger RoPE/RMSNorm/GQA/MLA-mixture GPT than the ones we already have).
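Since RoPE gets name-dropped above: rotary position embeddings encode position by rotating consecutive dimension pairs of a query/key vector through position-dependent angles, so relative position falls out of the dot product. A minimal single-vector sketch (numpy, illustrative; real implementations batch this over heads and sequence positions):

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotary position embedding for one vector.

    x: 1-D array with even length d; pos: integer token position.
    Each pair (x[2i], x[2i+1]) is rotated by angle pos * base**(-i/(d/2)).
    """
    d = x.shape[0]
    half = d // 2
    freqs = base ** (-np.arange(half) / half)  # per-pair rotation frequency
    theta = pos * freqs
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * np.cos(theta) - x2 * np.sin(theta)  # 2-D rotation,
    out[1::2] = x1 * np.sin(theta) + x2 * np.cos(theta)  # pair by pair
    return out
```

Because each pair is a pure rotation, the vector's norm is unchanged and position 0 is the identity, which is easy to sanity-check.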