I think you're misunderstanding my point. I'm not saying I don't know how to use planning modes or iterate on solutions.
Yes, you still decompose problems. But what's the decomposition for? To create sub-problems small enough that the AI can solve them in one shot. That's literally what planning mode does - help you break things down into AI-solvable chunks.
You might say "that's not real thinking, that's just implementation details." Look who came up the the plan in the first place << It's the AI! Plan mode is partial automation of the thinking there too (improving every month)
When Claude Code debugs something, it's automating a chain of reasoning: "This error message means execution reached this file. That implies this variable has this value. I can test this theory by sending this HTTP request. The logs show X, so my theory was wrong. Let me try Y instead."
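To make that concrete, here's a rough sketch of that hypothesis-test loop as code. This is just an illustration, not Claude Code's actual internals; every theory, check, and example value in it is made up.

```python
# Hypothetical sketch of a hypothesis-driven debugging loop.
# The theories and check functions below are invented for illustration;
# in practice the "checks" would be things like sending an HTTP request
# or grepping the logs.

from typing import Callable


def debug_loop(hypotheses: list[tuple[str, Callable[[], bool]]]) -> str | None:
    """Try each theory in turn: run its check, keep the first one the evidence supports."""
    for theory, check in hypotheses:
        print(f"Testing theory: {theory}")
        if check():
            print("Evidence supports it.")
            return theory
        print("Evidence contradicts it; trying the next theory.")
    return None


# Invented example: the first theory fails its check, the second holds up.
result = debug_loop([
    ("the config value is stale", lambda: False),
    ("the request hits the wrong host", lambda: True),
])
print("Surviving theory:", result)
```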
> When I stop the production line to say "wait, let me understand what's happening here," the implicit response is: "Why are you holding up progress? It mostly works. Just ship it. It's not your code anyway."
This is not a technical problem or an AI problem; it's a cultural problem where you work.
We have the opposite - I expect all of our devs to understand and be responsible for AI-generated code.
Getting it done correctly is what it's always been about.
I don't feel that code I write without assistance is mine, or some kind of achievement to be proud of, or something that inflates my own sense of how smart I am. So when some of the process is replaced by AI, there isn't anything in me that can be hurt by that; none of this is mine, and it never was.