> but these types of prompts and approaches are why I believe so many people think these models aren't useful.
100% agree. The prompt is a 'yolo prompt'. For that task you need to give it points on what to do so it can deduce its task list, provide files or folders in context with @, tell it how to test the outcome so it knows when it has succeeded (closing the feedback loop), and guide the implementation, either via memory or via context, with the existing libs or methods it should call on.
For greenfield tasks and projects I even provide architectural structure, interfaces, etc.
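To make the 'architectural structure, interfaces' part concrete, here is a minimal sketch of the kind of scaffolding I might paste into context for a greenfield task before asking the model to write anything. It's in TypeScript with invented names, purely for illustration, not from any real project:

    // Illustrative scaffolding only: domain and names are made up.
    export interface FeedItem {
      id: string;
      title: string;
      url: string;
      publishedAt: Date;
    }

    export interface FeedSource {
      // Each source decides its own fetching, caching, and retry behaviour.
      fetchLatest(limit: number): Promise<FeedItem[]>;
    }

    export interface FeedStore {
      upsert(items: FeedItem[]): Promise<void>;
      listRecent(limit: number): Promise<FeedItem[]>;
    }

    // The composition point is fixed up front; the model's job is to supply
    // concrete FeedSource/FeedStore implementations plus tests against it.
    export async function syncFeeds(
      sources: FeedSource[],
      store: FeedStore,
    ): Promise<number> {
      const batches = await Promise.all(sources.map((s) => s.fetchLatest(50)));
      const items = batches.flat();
      await store.upsert(items);
      return items.length;
    }

With the shape decided up front like this, the model is constrained to filling in implementations and tests rather than inventing its own structure.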
After reading Twitter, Reddit, and HN complaints about models and coding tools, I've come to the same conclusion as you.
One random, not-so-great example is pretty useless for drawing any conclusions. Yes, it's an experiment and we got a result. And now what? If I want reliable results, I would still go with the strategy of being as concrete as possible, because in all my AI work anything else makes the results more and more random. For anything non-standard (i.e. something you couldn't copy and paste straight from a Google or SO result), no matter how simple, I'm better off providing the basic step-by-step algorithm myself and leaving only the actual implementation to the AI.
> For that task you need to give it points on what to do so it can deduce its task list, provide files or folders in context with @…
- and my point is that you do not have to give ChatGPT those things. GP did not, and they got the result they were seeking.
That you might get a better result from Claude if you prompt it 'correctly' is a fine detail, but not my point.
(I've no horse in this race. I use Claude Code and I'm not going to switch. But I like to know what's true and what isn't and this seems pretty clear.)