GPT-4 has no understanding of logic whatsoever; let's stop pretending it does.
If it gives you a wrong solution, you have to point it out. It will then produce a second version, and if that is also wrong, it just keeps slightly modifying the same solution over and over instead of actually fixing the issue.
It gets stuck in a loop, cycling through two or three versions of the same solution with slightly different output.
It's only useful for boilerplate code, and even then you have to clean it up.