> With AI, I would not be surprised if quality actually improves and cost comes down (or stays the same). Of course, more bad software will be written, now that many aspiring entrepreneurs can realize their dream idea of a Spotify clone, then sacrifice their life savings on complex cloud bills, while ever more profitable cloud providers cite the extra revenue as a benefit of AI and do more layoffs to jack up their stock prices.
At this point we are all speculating, really. But from a logical point of view, LLMs are trained on code written by humans. As more and more code is written by LLMs instead, models will be trained on content written by other models. It will be very hard to tell which code on GitHub was written by a human and which by a model (unless the quality differs substantially). If that happens, I would say the quality of the code they write will drop. Or the quality of the models will drop. Or model-written code will keep using pre-LLM patterns, because model-written code will not make it into the training data.
It may be that LLM-written code will work but be hardly comprehensible to a human.
For now, models do not have the negative feedback loop that humans have ('the code does not compile', 'the code compiles but throws an exception', 'the code compiles and works but performs poorly').
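To make that point concrete, the three signals the comment lists are cheap to produce mechanically; whether they are ever fed back into a model is the open question. A minimal sketch, assuming the candidate code arrives as a Python source string (the function name `check_candidate` and the 1-second performance threshold are invented for illustration):

```python
import subprocess
import sys
import tempfile
import time


def check_candidate(source: str, timeout_s: float = 5.0) -> str:
    """Return one of the feedback signals mentioned above, or 'ok'."""
    # 1. "code does not compile" -- try to byte-compile the source.
    try:
        compile(source, "<candidate>", "exec")
    except SyntaxError:
        return "does not compile"

    # 2. "compiles but throws an exception" -- run it in a subprocess.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    start = time.monotonic()
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return "works but performs poorly"  # hung or far too slow
    if result.returncode != 0:
        return "throws an exception"

    # 3. "compiles and works but performs poorly" -- crude wall-clock check.
    if time.monotonic() - start > 1.0:  # arbitrary threshold for the sketch
        return "works but performs poorly"
    return "ok"
```

This only shows that the signals exist; closing the loop would still require someone to route them back into training or generation.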
Anyway, I am sure there will be an impact on the whole industry, but I doubt models will become the primary source of source code. A helpful tool for sure, but not a drop-in replacement for developers.