There are times when writing the code is itself the bottleneck. It's not everyday code: you don't quite know how to write it. Whatever you try breaks somehow, and you don't readily understand why, even though the failure is deterministic and you have a 100% repro test case.
An example of this is making changes to a self-hosting compiler. Due to something you don't understand, your change causes a mistranslation. That mistranslation is silent, though: it causes the compiler to mistranslate itself, and that mistranslated compiler then mistranslates something else in a different way, unrelated to the initial mistranslation. And not just any something else, but some rarely occurring something else. Your change is almost right: it does the right thing with numerous examples, some of them complicated. Working out how to make the change in the 100% correct way that doesn't cause this problem is a puzzle.
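To make that failure mode concrete, here is a minimal sketch of the classic three-stage bootstrap check that surfaces exactly this kind of silent self-miscompilation. The compiler binary and file names (old_cc, compiler_src, stage1 through stage3) are hypothetical placeholders, not anything from a real project.

```python
import subprocess
import filecmp

def build(cc, src, out):
    # Compile the compiler's own source with the given compiler binary.
    subprocess.run([cc, src, "-o", out], check=True)

build("old_cc", "compiler_src", "stage1")    # trusted compiler builds the patched source
build("./stage1", "compiler_src", "stage2")  # the patched compiler builds itself
build("./stage2", "compiler_src", "stage3")  # ...and its own output builds itself again

# If the patch is correct, stage2 and stage3 should be byte-identical.
# A mismatch means translation changed between generations: a silent
# mistranslation somewhere, even if every ordinary test still passes.
if filecmp.cmp("stage2", "stage3", shallow=False):
    print("fixed point reached: no self-miscompilation detected")
else:
    print("stage2 and stage3 differ: silent self-miscompilation")
```

The point of the second-generation build is that a mistranslation invisible in stage1's output can still corrupt stage2's behavior, which only shows up in what stage2 produces.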
LLM AI is absolutely worthless in this type of situation because it's not something you can wing from the training data. It's not a verbal problem of token manipulation. Sure, if you already know how to code this correctly, then you can talk the LLM through it, but it could well be less effort just to do the typing.
However, writing everyday, straightforward code is in fact the bottleneck for every single one of the LLM cheerleaders you encounter on social networks.