The gap between "AI is a 90% solution" and "100% required for production" is enormous.
In my bubble, AI-generated code is maybe 70% useful, often less. The remaining 30% isn't minor polish—it's:
Understanding system architecture constraints
Handling edge cases AI doesn't know exist
Debugging when AI-generated code breaks in production
Knowing when AI's "solution" creates more problems than it solves
That last 30% is what separates engineers from prompt writers. And it's not getting smaller—if anything, it's growing as systems get more complex.
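To make the edge-case point concrete, here's a hypothetical sketch. The invoice-total helper, its fields, and the scenario are invented for illustration, not taken from any real codebase: a plausible assistant-generated version that passes the happy path, next to the version an engineer actually ships.

```python
from decimal import Decimal, ROUND_HALF_UP

# A plausible assistant-generated helper: fine on the happy path,
# but float arithmetic drifts on money values, and an empty invoice
# silently totals to 0 instead of failing loudly.
def invoice_total_naive(line_items):
    return sum(item["price"] * item["qty"] for item in line_items)

# The version that survives review: exact decimal math, explicit
# rounding, and an error on empty input instead of a silent zero.
def invoice_total(line_items):
    if not line_items:
        raise ValueError("invoice has no line items")
    total = sum(Decimal(str(item["price"])) * item["qty"] for item in line_items)
    return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

if __name__ == "__main__":
    items = [{"price": 0.10, "qty": 1}, {"price": 0.20, "qty": 1}]
    print(invoice_total_naive(items))  # 0.30000000000000004
    print(invoice_total(items))        # 0.30
```

Neither fix is hard. The hard part is knowing which inputs your system will actually see, and that knowledge doesn't come from a prompt.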
Every new product, integration, or business line introduces new edge cases, dependencies, and coordination paths. What starts as a clean architecture turns into a web of overlapping constraints: legacy data formats, mismatched latency expectations, regulatory quirks, and "temporary" patches that become permanent.
You can manage complexity for a while, but you can’t eliminate it. Every layer that simplifies work for one team usually adds hidden coupling for another. Over time, the system stops being a single design and becomes an ecosystem of compromises.