I would completely disagree. I use LLMs daily for coding. They are quite far from AGI, and it does not appear they will be replacing Senior or Staff Engineers any time soon. But they are incredible machines that are perfectly capable of performing some economically valuable tasks in a fraction of the time it would have taken a human. If you deny this, your head is in the sand.
Capable, yeah, but not reliable; that's my point. They can one-shot fantastic code, or they can one-shot code that I then have to review and pull my hair out over for a week because it's such crap (and the person who pushed it is my boss, for example, so I can't just tell him to try again).
It's not much better with planning either. The amount of time I spend planning, clarifying requirements, and hand-holding implementation details always offsets any potential savings.
Actually, yeah, a couple of times, but that was a rubber-duck approach: the AI said something utterly stupid, but while trying to explain the problem to it, I figured it out myself. I don't think an LLM has ever solved a difficult problem for me. However, I think I'm likely an outlier, because I do solve most issues myself anyway.