I don't know. My personal take is that low-code/no-code tools should have ushered in an era of homemade software, but they didn't. It's something I think about a lot, incidentally. We've had the technology to build software with a GUI rather than a text editor for a very long time, and yet programmers still use text editors, and programming in general hasn't really been all that democratized. At best, it's now possible to create a website without knowing how to code, but it usually isn't a particularly good one.
A simple explanation is that the devil is in the details when it comes to implementation. Edge cases and fine-grained behavior are hard to capture except in equally granular snippets of code. I'm not convinced that LLMs necessarily solve this problem. Even if you are working with a theoretically quite competent LLM, there are still going to be instances where describing what you want is actually challenging, to the point where it would be easier if you just did it yourself. You could argue that this doesn't matter for simple software, but I think we underestimate what people really expect out of something that's "simple", and we underestimate people's tendency to grow bored of old toys, especially if they don't work as expected.
If anything, my belief is that LLMs by themselves aren't going to make homemade software a reality. You need an accessible and empowering UX/UI to go along with them. Otherwise, asking an LLM to build software for you probably won't be much fun for anyone who isn't an AI enthusiast first and foremost.
Side note: I have painful feelings about so many UX researchers I used to admire jumping on board the AI hype train so uncritically. I kind of get it; their job is to speculate on the new possibilities a technology offers without getting too hung up on external complications. Still, I feel disillusioned. Prior to all of this, these same people were questioning our implicit assumptions about how we interact with computers in really interesting ways (in the same vein as Bret Victor). Now their takes are starting to converge with those of the usual anonymous midwit AI enthusiast on Twitter who pivoted from crypto.
Put more bluntly, the idea that LLMs will usher in a golden age of people making simple software is kind of a boring speculative future, one shared and talked about by the most uninspired people on Twitter.
The major fallacy in low/no-code, and in proposing LLMs as programming tools, is thinking that you can automate away complexity.
Any automation introduces complexity of its own. There's no free simplicity lying around. We all have to take low-entropy energy sources and dissipate them to get work done. If you don't want to reap barley by hand, you need an entire supply chain for machinery. The complexity can be hidden, and you can pay other people to deal with it, but someone still has to manage it.
People want to buy machinery and have technical support; they don't want to build and maintain machinery. People want working software; they don't want to design, build, and maintain it. That complexity never goes away.
> there are still going to be instances where describing what you want is actually challenging, to the point where it would be easier if you just did it yourself
Yeah, exactly. I've seen meetings between engineers reach a level of technical specificity where I've had the thought, "we are just coding in English now."
Even if you're able to communicate your intent with natural language rather than a "programming language," you still arrive at a level of concrete detail that is difficult or impossible to describe without some sort of standard technical shorthand or jargon. This is true whether the "listener" is an LLM, some other sophisticated machine interpreter, or just another person.
Lately, with all this talk of natural-language-as-programming-language, I've tried reflecting on my own code and seeing how I might convey the same intent, but in English.
It's made me realize that, as a rule, intent at least feels easier to convey through code than through natural language. This is especially true when I'm trying to perform some complex mathematical task that's geometric in nature. Put another way, it's probably easier to understand how Poisson disc sampling works by reading the code for it than by reading the actual paper, especially if the code has comments providing context for the different blocks. It doesn't help that the paper uses dense mathematical terminology, whereas the code, at worst, might have very terse variable names.
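To make that concrete, here's a minimal sketch of the grid-accelerated (Bridson-style) approach in Python. The function name, parameters, and structure are my own choices for illustration, not a reference implementation:

```python
import math
import random

def poisson_disc(width, height, r, k=30):
    """Sample points in a width x height rectangle, no two closer than r."""
    cell = r / math.sqrt(2)  # cell size chosen so each grid cell holds at most one sample
    cols, rows = int(math.ceil(width / cell)), int(math.ceil(height / cell))
    grid = [[None] * cols for _ in range(rows)]  # spatial hash for cheap neighbor checks

    def grid_coords(p):
        return int(p[1] // cell), int(p[0] // cell)

    def fits(p):
        # Check the 5x5 neighborhood of cells around p for any sample closer than r.
        gy, gx = grid_coords(p)
        for y in range(max(0, gy - 2), min(rows, gy + 3)):
            for x in range(max(0, gx - 2), min(cols, gx + 3)):
                q = grid[y][x]
                if q is not None and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < r * r:
                    return False
        return True

    first = (random.random() * width, random.random() * height)
    gy, gx = grid_coords(first)
    grid[gy][gx] = first
    samples, active = [first], [first]

    while active:
        idx = random.randrange(len(active))
        base = active[idx]
        for _ in range(k):  # try up to k candidates in the annulus [r, 2r) around base
            angle = random.uniform(0, 2 * math.pi)
            dist = random.uniform(r, 2 * r)
            cand = (base[0] + dist * math.cos(angle), base[1] + dist * math.sin(angle))
            if 0 <= cand[0] < width and 0 <= cand[1] < height and fits(cand):
                gy, gx = grid_coords(cand)
                grid[gy][gx] = cand
                samples.append(cand)
                active.append(cand)
                break
        else:  # no candidate fit after k tries: retire this point
            active.pop(idx)
    return samples

points = poisson_disc(100, 100, 5)
print(len(points), "samples with minimum spacing ~5")
```

Even with terse names, the active list, the annulus of candidate points, and the grid lookup make the shape of the algorithm visible in a way a paragraph of prose struggles to.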
And yet, I think a better example might be found in markup languages like HTML. Trying to convey the layout of a webpage using ONLY natural language seems really hard compared to using a markup language.