> It seems like what GitHub is aiming for is a future where "what the business wants" can just be expressed in natural language, the same way you might explain to a human developer what you want to build.
We've been there before with 4GLs in many forms, and they all failed for the same reason: it takes reasoning to understand the business needs and translate them into a model expressed in code.
LLMs might be closer to that goal than previous attempts, but they still fail at reasoning, they still misunderstand imprecise prompts, and correcting them gets spotty as complexity grows.
There's a gap that LLMs can fill, but they won't be a silver bullet. To me, LLMs have been extremely useful for retrieving knowledge I already had (syntax from programming languages I stopped using a while ago; techniques, patterns, algorithms, etc. whose details I'd forgotten), but every single time I tried to use one to translate thoughts into code, it failed miserably.
They do provide a lot in terms of guiding me into topics I know little about: I can prompt one for a roadmap of what I might need to learn on a given topic (like DSP), though I have to double-check the information against sources of truth (books, the internet). The same goes for code examples of a given technique; they can be a good starting point for fleshing out the map of knowledge I'm missing.
In every other case where I tried to use one professionally, it broke down spectacularly at some point. A friend who is a PM, and quite interested in everything GenAI-related, has been trying to hone prompts that could generate a barebones application for him, to explore how it could enhance his skills. It's been six months, and the furthest he's gotten is two views of the app and saving some data through Core Data on iOS, something that could've been done in an afternoon by a mid-level developer.
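For scale, here is a minimal sketch of the kind of Core Data work that "afternoon" estimate refers to: defining an entity, saving a record, and fetching it back. The entity and attribute names ("Note", "title") are hypothetical, and the model is built programmatically with an in-memory store so the snippet stands alone, which is not how a real app would typically set it up.

```swift
import CoreData

// Hypothetical "Note" entity with a single string attribute, built
// programmatically instead of via an .xcdatamodeld file.
let noteEntity = NSEntityDescription()
noteEntity.name = "Note"

let titleAttr = NSAttributeDescription()
titleAttr.name = "title"
titleAttr.attributeType = .stringAttributeType
noteEntity.properties = [titleAttr]

let model = NSManagedObjectModel()
model.entities = [noteEntity]

// Point the store at /dev/null so nothing touches disk.
let container = NSPersistentContainer(name: "Scratch", managedObjectModel: model)
container.persistentStoreDescriptions.first?.url = URL(fileURLWithPath: "/dev/null")
container.loadPersistentStores { _, error in
    if let error { fatalError("Store failed to load: \(error)") }
}

// Save one record, then fetch it back.
let ctx = container.viewContext
let note = NSManagedObject(entity: noteEntity, insertInto: ctx)
note.setValue("First note", forKey: "title")
try ctx.save()

let fetch = NSFetchRequest<NSManagedObject>(entityName: "Note")
let results = try ctx.fetch(fetch)
print(results.count)  // prints 1
```

Wiring two SwiftUI views on top of a stack like this is routine work for an experienced iOS developer, which is what makes the six-month prompt-iteration timeline striking.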
I agree that we're far off from such a future, but it does seem plausible. Although I wouldn't be surprised to find that, when and if we get there, the underlying technology looks very different from the LLMs of today.
> something that could've been done in an afternoon by a mid-level developer
I think that's pretty powerful in itself (the six months to get there notwithstanding). I expect such use cases to become much more accessible in the near future. Being able to prototype something with limited knowledge can be incredibly useful.
I briefly did some iOS development at a startup. I started with literally zero knowledge of the platform, and what I came up with barely worked, but it was sufficient for a proof of concept. Eventually, most of what I wrote was thrown out when we got an experienced iOS dev involved. I can imagine a future where I would have been completely removed from the picture and the business folks just built the prototype on their own. Failing that, I would at least have been able to cobble something together much more quickly.