you have decades of experience reviewing code produced at industrial scale to look plausible, but with zero underlying understanding, mental model, or reference to ground truth?
glad I don't work where you do!
it's actually even worse than that: the learning process that produces it doesn't care about correctness at all, not even slightly
the only thing that matters is producing output that looks plausible enough to con the human into pressing "accept"
(can you see why people would be upset about feeding output generated by this process into a security critical piece of software?)
The statement that correctness plays no role in the training process is objectively false. It's untrue for text LLMs, and even more so for code LLMs. What would be correct is that the training process and the architecture of LLMs cannot guarantee correctness.
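To make the distinction concrete, here is a minimal sketch (toy data, made-up names, no real model) of the two kinds of training signal being argued about: the base next-token loss, which only scores how plausible the continuation looked, versus an execution-based reward of the sort used to fine-tune code models, which does feed a correctness signal back into training without guaranteeing correctness of every output.

    import math

    def likelihood_loss(token_probs):
        """Cross-entropy over the emitted tokens.

        The base pre-training objective: it measures how plausible the
        continuation looked, not whether the code it spells out runs.
        """
        return -sum(math.log(p) for p in token_probs)

    def execution_reward(candidate_src, tests):
        """Reward a generated function by running it against unit tests.

        Signals like this (rejection sampling, RL on test outcomes) inject a
        correctness signal into code-model training, but only for the tests
        we happen to have.
        """
        namespace = {}
        try:
            exec(candidate_src, namespace)   # run the generated definition
            return sum(t(namespace) for t in tests) / len(tests)
        except Exception:
            return 0.0                       # code that doesn't even run gets no reward

    # Toy "model output": plausible-looking but wrong.
    candidate = "def add(a, b):\n    return a - b\n"

    tests = [
        lambda ns: ns["add"](2, 2) == 4,
        lambda ns: ns["add"](0, 0) == 0,
    ]

    print(likelihood_loss([0.9, 0.8, 0.95]))   # small loss: the text looked plausible
    print(execution_reward(candidate, tests))  # 0.5: only one test passes

The point of the sketch: the second signal is exactly "caring about correctness", it just can't promise it, because the reward only sees the tests you wrote.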
From decades of experience, quite honestly.