> There’s a classic joke that my brother loves: a software engineer’s partner asks him to go to the store and get milk, and if there are eggs, bring twelve! The engineer comes back with twelve bottles of milk. When asked why, he says “they had eggs”.
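The joke hinges on two readings of the same sentence. A playful sketch of both, with hypothetical function names, makes the ambiguity concrete: the engineer binds "twelve" to the item already in scope (milk) instead of to the eggs.

```python
def shopping_intended(store_has_eggs: bool) -> dict:
    # Intended reading: buy one milk; if they have eggs, also buy a dozen eggs.
    items = {"milk": 1}
    if store_has_eggs:
        items["eggs"] = 12
    return items

def shopping_literal(store_has_eggs: bool) -> dict:
    # Literal reading: "if there are eggs, bring twelve" -- twelve of the
    # thing you were asked to get, i.e. twelve bottles of milk.
    quantity = 12 if store_has_eggs else 1
    return {"milk": quantity}

shopping_literal(True)   # {"milk": 12} -- "they had eggs"
```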
Notably, a modern LLM wouldn't make this mistake.
It's not at all clear to me that LLMs are or will become better at translating Python → C than English → C. It makes sense in theory, because programming languages are precise and English is not. In practice, however, LLMs don't seem to have any problem interpreting natural language instructions. When LLMs make mistakes, they're usually logic errors, not the result of ambiguities in the English language.
(I am not talking about the case where you give the LLM a one-sentence description of an app and it fails to lay out every feature as you'd imagined it. Obviously, the LLM can't read your mind! However, writing detailed English is still easier than writing Python, and I don't really have issues with LLMs interpreting my instructions via Genie Logic.)
I would have found this post more convincing if the author could point to examples of an LLM misinterpreting imprecise English.
P.S. I broadly agree with the author that the claim "English will be the only programming language you’ll ever need" is probably wrong.
> In practice, however, LLMs don't seem to have any problem interpreting natural language instructions
I can think of a couple of reasons this may be the case.
1. There is a subset of English that you use unknowingly that has a socially accepted formal definition, and so can be used as a substitute for a programming language. LLMs have learned this definition. Straying from this subset, or expecting a different formal definition, will result in errors.
2. The level of detail in your English description is such that ambiguity genuinely does not arise. Unlikely, because at that level of detail you would no longer consider it "natural language".
3. English is not ambiguous when describing program features, and formal definitions can be skipped. Unlikely, because the entire product owner role is built on the frequently exclaimed "that's not what I meant!".
I think it's #1, and it makes the most sense: through massive amounts of training data, LLMs have learned which natural-language instructions correspond to which modifications in codebases, across a huge range of generic problems they have seen examples of.
The moment you do something new though, all bets are off.
Yeah, the example with the eggs isn't great, because an LLM would indeed arrive at the correct interpretation. The catch is that this relies on the LLM having been trained on the relevant context.
When an LLM has that context, it is usually able to correctly fill the gaps in vague English specifications.
But if you are operating at the bleeding edge of innovation, or in the depths of industry expertise that LLMs weren't trained on, they won't be in a position to fill those blanks correctly.
And domains with less training data openly available are areas where innovation and differentiation and business moats live.
Oftentimes, only programming languages are precise enough to specify this type of knowledge.