
> Someone is still gonna have to convert what the business wants into instructions for the LLM

It seems like what GitHub is aiming for is a future where "what the business wants" can just be expressed in natural language, the same way you might explain to a human developer what you want to build. I would agree that right now, LLMs generally don't do well with very high-level instructions, but I'm sure that will improve over time.

As for the security concerns, I think that's a fair point. However, as LLMs become more efficient and easier to deploy on-prem, that mitigates one significant class of concerns. You could also reasonably make the argument that LLMs are more likely to write insecure code. I think that's true compared to a senior dev, but I'm not so sure with junior folks.



> It seems like what GitHub is aiming for is a future where "what the business wants" can just be expressed in natural language, the same way you might explain to a human developer what you want to build.

We've been there before with 4GLs in many forms; they all failed for the same reason: it takes reasoning to understand the business needs and translate them into a model expressed in code.

LLMs might be closer to that than other technologies that attempted the same thing, but they still fail at reasoning, they still misunderstand imprecise prompts, and correcting them gets spotty as the complexity grows.

There's a gap that LLMs can fill but that won't be a silver bullet. To me LLMs have been extremely useful to retrieve knowledge I already had (syntax from programming languages I stopped using a while ago; techniques, patterns, algorithms, etc. that I forgot details about) but every single time I attempted to use one to translate thoughts into code it failed miserably.

It does provide a lot in terms of railroading me into topics I know little about. I can prompt one to give me a roadmap of what I might need to learn on a given topic (like DSP), but I have to double-check the information against sources of truth (books, the internet). Same for code examples for a given technique: it can be a good starting point to flesh out the map of knowledge I'm missing.

In any other case where I tried to use it professionally, it breaks down spectacularly at some point. A friend who is a PM and quite interested in all GenAI-related stuff has been trying to hone prompts that could generate him some barebones application, to explore how it could be used to enhance his skills. It's been 6 months, and the furthest he's gotten is two views of the app and saving some data through Core Data on iOS, something that could've been done in an afternoon by a mid-level developer.


I agree that we're far off from such a future, but it does seem plausible. Although I wouldn't be surprised to find that if and when we get there, the underlying technology looks very different from the LLMs of today.

> something that could've been done in an afternoon by a mid-level developer

I think that's pretty powerful in itself (the 6 months to get there notwithstanding). I expect to see such use cases become much more accessible in the near future. Being able to prototype something with limited knowledge can be incredibly useful.

I briefly did some iOS development at a startup I worked at. I started with literally zero knowledge of the platform, and what I came up with barely worked, but it was sufficient for a proof of concept. Eventually, most of what I wrote was thrown out when we got an experienced iOS dev involved. I can imagine a future where I would have been completely removed from the picture and the business folks just built the prototype on their own. Failing that, I would have at least been able to cobble something together much more quickly.


> It seems like what GitHub is aiming for is a future where "what the business wants" can just be expressed in natural language, the same way you might explain to a human developer what you want to build.

I do agree that this is their goal, but I expect that expressing what you want the computer to do in natural language is still going to be done by programmers.

It's similar to how COBOL is closer to natural language than assembly, so more people can write COBOL programs, but you still need the same skills: the ability to phrase what you need in a way the compiler (or, in the future, the LLM) can understand, the ability to debug it when something goes wrong, etc.

“Before LLM, chop wood, carry water. After LLM, chop wood, carry water.”

As for the security stuff, on-premise or trusted cloud deployments will definitely solve a lot of the security issues, but I think it will be a long time before conservative businesses embrace them. Of the people in college now, most of those who end up working at non-tech companies won't be using LLMs regularly for a while yet.


SQL and Python are arguably the languages closest to English, and even then, getting someone to understand recursion is difficult. How do you specify that some values should be long-lived? How do you specify exponential retries? Legalese tries to be as specific as possible without being formal, and even then you need a judge on a case. Maybe when everyone has today's datacenter compute power in their laptop.
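To illustrate the point: even a short phrase like "retry with exponential backoff" hides several decisions that code has to make explicit. A minimal Python sketch (the function name and defaults are my own, purely illustrative):

```python
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call fn, doubling the wait after each failure (exponential backoff).

    Every detail here -- the attempt count, the base delay, the growth
    factor, which exceptions count as failures -- is a decision that plain
    English like "retry a few times" leaves unspecified.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, propagate the last error
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Example: a function that fails twice, then succeeds.
# Injecting delays.append as the sleep lets us observe the backoff schedule.
delays = []
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry_with_backoff(flaky, sleep=delays.append)
# result == "ok", delays == [1.0, 2.0]
```

A natural-language spec would have to pin down all of those same parameters before an LLM (or a contractor, or a judge) could decide whether an implementation is correct.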


> arguably the languages closest to English

Yes, but they're not English. All the concerns that you mention are ones that I think LLM development tools are aiming to eliminate from explicit consideration. Ideally, a user of such a tool shouldn't even need to have heard of recursion. I think we're a long way off from that future, but it does feel possible.


Have you ever actually tried getting proper, non-contradictory requirements in plain natural language from anyone?

Good luck


This is absolutely a skill in itself. It could well be the case that such a plain expression of requirements in natural language is a valuable skill that enables use of such tools in the future.



