There are ways to fight it, though. Look at the Linux kernel, for instance: they were overwhelmed with poor contributions long before LLMs. The answer is to maintain standards that put as much of the burden on the contributor as possible, and to normalize an unapologetic "no" from reviewers.
Does that work as well with non-strangers who are your coworkers? I'm not sure.
Also, if you're organizationally changing the culture to force people to put more effort into writing code, why are you even organizationally using LLMs...?
> Does that work as well with non-strangers who are your coworkers?
Yeah, OK, I guess you have to be a bit less unapologetic than the Linux kernel maintainers in this case, but I think you can still shift the culture towards more careful PRs.
> why are you even organizationally using LLMs
Many people believe LLMs make coders more productive, and given the rapid progress of gen AI, it's probably not wise to just dismiss this view. But there need to be guardrails to ensure the productivity is real and not just creating liability. We could live with weaker guardrails if we could trust that the code was in a trusted colleague's head before it appeared in the repo. But if we can't, I guess stronger guardrails are the only way, aren't they?
I don’t want to just dismiss the productivity increase. I feel 100% more productive on throwaway POCs and maybe 20% more productive on large, important codebases.
But when I actually sit down and think it through, I’ve wasted multiple days chasing down subtle bugs that I never would have introduced myself. It could very well be that there’s no productivity gain for me at all. I wouldn’t be at all surprised if the numbers showed that was the case.
But let’s say I am actually getting 20%. If this technology dramatically increases the output of juniors and mid-level technical tornadoes, that’s going to easily erase that 20% gain.
I’ve seen codebases that were dominated by mid-level technical tornadoes and juniors; no amount of guardrails could ever fix them.
Until we are at the point where no human has to interact with code (and I’m skeptical we will ever get there short of AGI), we need automated, objective guardrails for “this code is readable and maintainable”, and I’m 99.999% certain that is just impossible.
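To be concrete about why: the closest thing to an "automated objective guardrail" I can picture is a CI gate over crude proxies like function length and branch count. Here's a rough, stdlib-only Python sketch with made-up thresholds (the names and numbers are mine, not anyone's actual tooling), and it illustrates the problem: none of these numbers measure whether the code is actually readable or maintainable.

```python
# Hypothetical CI guardrail sketch: flags crude complexity proxies only.
# Thresholds are illustrative; nothing here judges actual readability.
import ast
import sys

MAX_FUNC_LINES = 50   # made-up threshold
MAX_BRANCHES = 10     # made-up threshold


def check_file(path: str) -> list[str]:
    """Return warnings for functions exceeding the length/branching thresholds."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    warnings = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            branches = sum(
                isinstance(n, (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp))
                for n in ast.walk(node)
            )
            if length > MAX_FUNC_LINES:
                warnings.append(f"{path}:{node.lineno} {node.name} is {length} lines long")
            if branches > MAX_BRANCHES:
                warnings.append(f"{path}:{node.lineno} {node.name} has {branches} branch points")
    return warnings


if __name__ == "__main__":
    problems = [w for f in sys.argv[1:] for w in check_file(f)]
    print("\n".join(problems))
    sys.exit(1 if problems else 0)
```

A junior or an LLM can trivially satisfy every check above and still produce something unmaintainable, which is exactly my point.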
My point with that second question was: is the human challenge of getting a lot of inexperienced engineers to fully understand the LLM output actually worth the time, effort, and money to solve, versus sticking to solving the technical problems you're trying to make the LLM solve?
Usually organizational changes are massive efforts. But I guess hype is a hell of an inertia buster.
The change is already happening. People graduating now are largely "AI-first", and it's going to be even worse if you listen to what teachers say. And management often welcomes it too. So you need to deal with it one way or another.
It's measurable in the number of times you have to spend >x minutes helping them get through something they should have written themselves. You can count the number of times you have to look at something and tell them, "do it again, but without an LLM this time". At some point you fire them.
My opinion of someone is how I decide whether I want to work with them and help them grow, or fire them / wait for them to fail on their own merit (if somebody else is in charge of hiring and firing).