I used to work at NASA as an engineer, and the way it worked (e.g. space station operations or shuttle missions) was that hundreds of engineers worked on various complex systems on the ground while astronauts tried to carry out a mission in space.
What this means is that dozens of procedures and activities are happening at any one time in orbit, and a Flight Director on the ground and an astronaut in space need to be at least cognizant of them (at least enough to prevent disasters and complete the tasks) - this is the greatest challenge of on-orbit work.
To widen this metaphor: collecting and presenting complex operational data on different parts of a system in a USEFUL way is the greatest challenge of complex work, and software engineering is about controlling complexity above all.
Now at NASA, we often wrote up procedures and activities with an "astronauts are smart, they can grok it" mindset; but during debriefs the common refrain from those at the top of the pyramid was "I don't have the mental capacity to handle and monitor dozens of systems at the same time." Humans are very bad at getting into flow when monitoring things. Maybe some kind of flow state would be achievable in orchestrating agents - like a conductor over an orchestra - but I don't see that happening with multiple parts of the codebase getting altered at the same time by a dozen agents.
Cursor and agentic tools bring this complexity to our daily work on the desktop (and try to tame it through a chat window or text response); now we might have dozens of AI agents working on aspects of our codebase! Yes, it's incredible progress, but with this amazing technical ability comes great responsibility for the human overseer... this is the "astronaut" in my earlier metaphor: an overburdened software engineer.
Culture-wise it's also worrying: management teams now expect software devs to deliver much faster. This is dangerous, since we can use these tools but are forced to leave more to autopilot in the hope of catching bugs in test, etc. The trend I see is to push human oversight away onto blind agents, but I think this is the wrong model for now - how can I trust an agent without understanding all that it did?
To summarize, I like both Cursor and Claude Code, but I think we need better UX paradigms so that we can better handle conflicts, stupid models, and reversions, with better windows onto what changed code-wise. I also get the trend of spinning up trash-able instances in containers and killing them on failure, but we still need to understand how a code change impacts other parts of the codebase.
Anyway, nobody on the Cursor team will even read this post - they will just summarize the whole HN thread with AI and file some software tickets to add another checkbox to the chat window in response. This is not the engineering we need for this new paradigm of working; we need some deep "human" design thinking here.