The problem is that for a majority of those tasks, people conveniently "forget" the actual start and end of the process. LLMs can't start most of those tasks by their own decision, and neither can they end them and evaluate the result. Sure, we've automated many tasks from a very low percentage to a very high percentage, and that is really impressive. But I don't see how any LLM can bridge the gap from a very high percentage of automation to a strict 100% of automation, for any task. And if a program requires a real intelligence handling and controlling it, is it really AI?