One of the main issues I had with Claude Code (maybe it's the harness?) was that the agent tends to NOT read enough relevant code before making a change.
This leads to it writing unnecessary helper functions instead of reusing existing ones, and so on.
Not sure if that's an issue with the models, the system prompts, or both.
This may have been fixed as of yesterday... Version 2.0.17 added a built-in "Explore" sub-agent that it seems to call quite a lot.
It helps solve the inherent tradeoff between reading more files (and filling up context) and keeping the context nice and tight (but maybe missing relevant stuff).
I sometimes use it, but I've found it works just as well to add something like this to my claude.md: "if you ever refactor code, try searching around the codebase to see if there is an existing function you can use or extend"
> I sometimes use it, but I've found it works just as well to add something like this to my claude.md: "if you ever refactor code, try searching around the codebase to see if there is an existing function you can use or extend"
Wouldn't that consume a ton of tokens, though? After all, if you don't want it to recreate function `foo(int bar)`, it will need to find it, which means either running grep (takes time on large codebases) or actually loading all your code into context.
Maybe it would be better to create an index of your code and let it run some shell command that greps your ctags file, so it can quickly jump to the possible functions that it is considering recreating.
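As a rough sketch of that idea: a tiny lookup script the agent could shell out to instead of grepping the whole tree. It assumes the default tab-separated Universal Ctags `tags` file format (`name<TAB>file<TAB>pattern;"<TAB>kind`); the file name `tags` and the example symbol are illustrative.

```python
import sys

def lookup(symbol: str, tags_path: str = "tags"):
    """Return (name, defining file) pairs for an exact symbol match
    in a ctags index, so the agent can check for an existing helper
    before writing a new one."""
    hits = []
    with open(tags_path) as f:
        for line in f:
            if line.startswith("!"):  # skip ctags pseudo-tag header lines
                continue
            fields = line.rstrip("\n").split("\t")
            if fields and fields[0] == symbol:
                hits.append((fields[0], fields[1]))
    return hits

if __name__ == "__main__":
    symbol = sys.argv[1] if len(sys.argv) > 1 else "foo"
    for name, path in lookup(symbol):
        print(f"{name} is already defined in {path}")
```

Since the tags file is sorted, this could even be a plain `grep "^foo\b" tags` one-liner in the agent's allowed commands; the point is that the index is cheap to search without loading source files into context.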
Helper functions have exploded over the last few releases, I'd say. Very often I have to state: "combine this into one function"
Another thing I've seen start in the last few days: Claude now always draws ASCII art instead of a graphical image, and the ASCII art is completely useless when something is being explained.
You can specify in your rules what type of output to use for diagrams. Personally I prefer Mermaid -> it can be rendered into an image, and read and modified by AI easily.
I agree, Claude is an impressive agent, but it seems impatient and tries to do its own thing: it makes its own tests when I already have them, etc. Maybe it's better for a new project.
GPT 5 (at least with cline) reads whatever you give it, then laser targets the required changes.
With High, as long as I actually provided enough relevant context it usually one shots the solution and sometimes even finds things I left out.
The only downside for me is it's extremely slow, but I still use it on anything nuanced.
> I agree, Claude is an impressive agent, but it seems impatient and tries to do its own thing: it makes its own tests when I already have them, etc. Maybe it's better for a new project.
Nope, Claude will deviate from its own project as well.
Claude is brilliant but needs hard rules. You have to treat it like, and make it feel like, the robot it really is. Feed it a bit too much human prose in your instructions and it will start to behave like a teen.
I regularly use the @ key to add files to context for tasks I know require edits, or patterns I want Claude to follow. It adds a few extra keystrokes, but in most cases the quality improvement is worth it.