I find that restricting it to very small modules that are clearly separated works well. It does sometimes do weird things, but I'm there to correct it with my experience.
I just wish I could have competent enough local LLMs and not rely on a company.
The ones approaching competency cost tens of thousands in hardware to run. Even if competitive local models existed, would you spend that much to run them? (And then have to upgrade the hardware every handful of years.)
But it does save me time in many other aspects, so I can't complain.