Hacker News

Accountants thought spreadsheets would kill their profession, instead demand for them exploded.

Compilers made it much easier to code compared to writing everything in Assembly. Python made it much easier to code than writing C. Both increased the demand for coders.

Code is a liability, not an asset. The fact that less technical people and people who are not trained engineers can now make useful apps by generating millions of lines of code is also only going to increase the need for professional software engineers.

If you're doing an HTML or even React boot camp, I think you'd be right to be a bit concerned about your future.

If you're studying algorithms and data structures and engineering best practices, I doubt you have anything to worry about.



I've seen it already. A small-business-owner friend of mine (a one-man show) with zero development experience was able to solve his problem (very custom, business-specific data -> calendar management) in a rough way using ChatGPT. But the code got past about 300 lines and really started to degrade. He'd put dozens of weekend hours into getting it that far by prompting ChatGPT over and over, but eventually it stopped being able to make the highly specific changes he needed. He came to me for help and I worked through it a bit as a friend, but the code quality was bad and I had to say "to really do this, I'd need to consult, and there's probably a better person to hire for that."

He's muddling along, but now that he's getting value out of it he's looking for low-cost devs to contract with. And I suspect that kind of story will continue quite a bit as the tech matures.


> And I suspect that kind of story will continue quite a bit as the tech matures.

Don't you think that this tech can only get better? And that there will come a time in the very near future when the programming capabilities of AI improve substantially over what they are now? After all, AI writing 300 line programs was unheard of a mere 2 years ago.

This is what I think GP is ignoring. Spreadsheets couldn't do every task an accountant can, so they augmented accountants' capabilities. Compilers can't write code from scratch, and Python doesn't write itself either.

But AI will continually improve and spread into more areas that software engineers were trained on. At first this will seem empowering: it will aid us in writing small chunks of code, or code that can be easily generated, like tests, which it already does. Then it will expand to writing even more code, improving in accuracy, debugging, refactoring, and reasoning, and in general become a better programming assistant for business owners like your friend than any human would be.

The concerning thing is that this isn't happening on timescales of decades, but of years and months. Unlike GP, I don't think software engineers will exist as they do today in a decade or two. Everyone will need to become a machine learning engineer, directly training and tweaking the work of AI; then, once AI can improve itself, it will become self-sufficient, and humans will only program as a hobby. Humans will likely be forbidden from writing mission-critical software in the health, government and transport industries. Hardware engineers might be safe for a while after that, but not for long either.


You are extrapolating from the huge improvements we saw 1-2 years ago. Performance improvements have flatlined. Current AI predictions remind me of the self-driving car hype from the mid-2010s.


> Performance improvements have flatlined.

Multimodality, MoE, RAG, open-source models, and robotics have all seen massive improvements in the past year alone. OpenAI's Sora is a multi-generational leap over anything we've seen before (not released yet, granted, but it's a real product). This is hardly flatlining.

I'm not even in the AI field, but I'm sure someone can provide more examples.

> Current AI predictions remind me of the self-driving car hype from the mid-2010s

Ironically, Waymo's self-driving taxis were launched in several cities in 2023. Does this count?

I can see AI skepticism is as strong as ever, even amidst clear breakthroughs.


You make claims of massive improvements, but as an end user I have not experienced them. With the amount of fake and cherry-picked demos in the AI space, I don't believe anything until I experience it myself.

>Ironically, Waymo's self-driving taxis were launched in several cities in 2023. Does this count?

No, because usage is limited to a tiny fraction of drivable space. More cherry-picking.


Just because you haven't used text generation with practically unlimited context windows, insight extraction from personal data, or massively improved text-to-image, image-to-image and video generation tools, or ridden in an autonomous vehicle, doesn't mean that the field has stagnated.

You're purposefully ignoring progress, and gating it behind some arbitrary ideals. That doesn't make your claims true.


No. The progress is not being ignored. Normal people just have a hard time getting excited about something that is not useful yet. What you are doing here is the equivalent of popular-science articles about exciting new battery tech: as long as it doesn't improve my battery life, I don't care. I will care once it hits the shelves and is useful to me; I do not care about your list of acronyms.


I was arguing against the claim that progress has flatlined, and when I gave concrete examples of recent developments that millions of people are using today, you shifted the goalposts to whether "normal" people are excited about it.

But sure, please tell me more about how AI is a fad.


You are seeing enemies where there are none. I am merely commenting on AI evangelists insisting I have to be psyched about every paper and PoC long before it ever turns into a usable product that impacts my life. I don't care about the internals of your field; nobody does. Achieve results and I will gladly use them.


We've entered the acceleration age


> But AI will continually improve

There is a bit of a fallacy in here. We don't know how far it will improve, or in what ways. Progress isn't continuous and linear; it comes in sudden jumps and phases, and often plateaus for quite a while.


Fair enough. But it's just as much of a fallacy to assert that it won't improve, no?

The rate of improvement over the last 5 years hasn't stopped, and in fact has accelerated in the last two. There is some concern that it's slowing down as of 2024, but there is so much interest, research, development and investment pouring into the field that it's more reasonable to expect further breakthroughs than not.

If nothing else, we haven't exhausted the improvements from just throwing more compute at existing approaches, so even if the field remains frozen, we are likely to see a few more generational leaps still.


AI can’t write its own prompts. 10k people can use the same prompt while actually needing 5000 different things.

No improvements to AI will let it read vague speakers’ minds. No improvement to AI will let it get answers it needs if people don’t know how to answer the necessary questions.

Information has to come from somewhere to differentiate 1 prompt into 5000 different responses. If it’s not coming from the people using the AI, where else can it possibly come from?

If people using the tool don’t know how to be specific enough to get what they want, the tool won’t replace people.

s/the tool/spreadsheets

s/the tool/databases

s/the tool/React

s/the tool/low code

s/the tool/LLMs


> AI can’t write its own prompts.

What makes you say that? One model can write prompts for another, and we have seen approaches that combine multiple models, as well as models that can evaluate the result of a prompt and retry with a different one.

> No improvements to AI will let it read vague speakers’ minds. No improvement to AI will let it get answers it needs if people don’t know how to answer the necessary questions.

No, but it can certainly produce output until the human decides it's acceptable. Humans don't need to give precise guidance or answer technical questions; they just need to judge the output.

I do agree that humans currently still need to be in the loop, as a primary data source and as validators of the output. But there's no theoretical reason AI, or a combination of AIs, couldn't do this in the future, especially once we move beyond text as the primary I/O mechanism.
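One concrete shape of "evaluate the result and retry with a different prompt" is a generate-critique loop. This is only a toy sketch; writer_model and critic_model are hypothetical stand-ins for real LLM API calls:

```python
def writer_model(prompt: str, feedback: str = "") -> str:
    """Hypothetical generator: produces a candidate answer,
    revised if critic feedback is present."""
    candidate = f"answer to {prompt!r}"
    return candidate + " (revised)" if feedback else candidate

def critic_model(candidate: str) -> tuple[bool, str]:
    """Hypothetical evaluator: accepts the candidate or rejects
    it with feedback for the next round."""
    if "(revised)" in candidate:
        return True, ""
    return False, "be more specific"

def generate_until_accepted(prompt: str, max_rounds: int = 5) -> str:
    """One model writes, another judges; retry with feedback
    until the critic accepts or rounds run out."""
    feedback = ""
    candidate = ""
    for _ in range(max_rounds):
        candidate = writer_model(prompt, feedback)
        accepted, feedback = critic_model(candidate)
        if accepted:
            break
    return candidate
```

In a real system both roles would be separate model calls (possibly different models), and the critic's feedback would be folded into the next prompt, but the control flow is this simple.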


I agree with your point; I just want to point out that models have already been trained on AI-generated prompts as synthetic data.


I predict the rate of progress in LLMs will diminish over time, whereas the difficulty of an LLM writing an accurate computer program goes up exponentially with complexity and size. Is an LLM ever going to be able to do what, say, Linus Torvalds did? Heck, I've seen much less sophisticated software projects than that which it's hard to imagine an LLM pulling off.

On the lower end, while Joe Average is going to be able to solve a lot of problems with an LLM, I expect more bugs will exist than ever before because more software will be written, and that might end up not being all that terrible for software developers.


I don't know about the rest of the developers in the world, but my dream come true would be a computer that can write all the code for me. I have piles of notebooks and files absolutely stuffed with ideas I'd like to try out, but being a single, measly human programmer, I can only work on one at a time, and it takes a long time to see each through. If I could get a prototype in 30 seconds that I could play with, and then have the machine iterate on it if it showed promise, I could ship a dozen projects a month. It's like Frank Zappa said: "So many books, so little time."


If that becomes the case, then within a short, finite amount of time all your ideas will already have a wide range of implementations/variations, because everybody will do the same as you.

It's like how LLMs are now on their way to taking over (or destroying) content on the web, and will take over posts on social media, letting anyone create anything so fast that the incentive to put manual labor into a piece of content becomes irrelevant. You work for days to write and publish a blog post, and in the same time thousands of posts are published alongside yours, fighting for the attention of the same audience, who might just stop reading entirely because of so much similar content.


Honestly, I don't care if other people are creating things similar to mine. Actually, I prefer it when more people work on the same things, because it means there are others I can talk to about them and collaborate with. Even if I don't want to work with others on a project, I'm not discouraged by other implementations existing; there's always something I would want to do differently from what's out there.

That's the whole point of building my own things, after all; if I were happy using whatever bog-standard app I could find on the web, why would I need to build it? It isn't just about making it my own, either. It's also about the fun of diving into the guts of a system and seeing how things work. Having a machine capable of producing that code gives me the fantastic ability to choose only the parts I'm interested in for a deep dive, while skipping the boring stuff I've done 1000 times before. I don't even have to type the code if I don't want to; I can just talk to the computer about the implementation details and explore various options with it. That in itself is worth the time spent, but the awesome side effect is that you get a new toy to play with too.


This makes sense to me. When updating anything beyond a small project, keeping things reliable and maintainable for the future is 10x more important than just solving the immediate bug or feature. Short-term hacks add up and become overwhelming at some point, even if each individual change seems manageable at the time.

I used to work as a solo contractor on small/early projects. The most common job opportunity I encountered was someone who had hired the cheapest offshore devs they could find, seen good early progress with demos and POCs, but over time things kept slowing down and eventually went off the rails. The codebases were invariably a mess of hacks and spaghetti code.

I think the best historical comp to LLMs is offshore outsourcing, except without the side effect of lifting millions of people out of poverty in the third world.


A large share of the problems in the world don't require computer programs more than a few hundred lines long to solve, so LLMs will still see use by DIY types.

People may underestimate how difficult it is for an LLM to write a long or complex computer program, though. It makes sense that LLMs do very well at pumping out boilerplate, leetcode answers, or trivial programs, but it doesn't necessarily follow that they would be that good at writing complex, sophisticated, unique custom software. That may in fact be much further away than a lot of people anticipate, in a "self-driving is just around the corner" kind of way.


> as the tech matures.

Then it means you can use the matured tech to build a superb service in one day. And improve it the next.


This is what I've been predicting for over a year: AI-assisted programming will increase demand for programmers.

It may well change how they do their work, though, just like spreadsheets did for accountants and compilers did for the earliest generation of hand-coded-ASM developers. I can imagine a future where we do most of our coding at an even higher level than today and only dive down into the minutiae when the AI isn't good enough or we need to fix or optimize something. The same is true for ASM today: people rarely touch it unless they need to debug a compiler or (more often) write something extremely optimized or using some CPU-specific feature.

Programming may become more about higher level reasoning than coding lower level algorithms, unless you're doing something really hard or demanding.


A crucial difference from the other examples is this: compilers and spreadsheets are deterministic and repeatable, and are designed to solve a very specific task correctly.

LLMs, certainly in their current form, aren't.

This doesn't necessarily contradict what you and GP are writing, but it does give a flavor to it that I expect to be important.


Exactly this.

When was the last time you wrote assembler?


People also forget that code is formal logic describing algorithms to a computer, which is just a machine. And because it's formal, it's rigid and not prone to manipulation. Instead of using LLMs, you'd be better off studying a book and adding some snippets to your editor. What I like about Laravel is its extensive use of generators: they know part of the code will be boilerplate. The nice thing about Common Lisp is that you can make the language itself generate the boilerplate.
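For illustration, a tiny generator in the same spirit can be sketched in Python (this generate_model helper is made up for the example; it is not Laravel's actual artisan command):

```python
from string import Template

# The repetitive structure is written once as a template; each call
# fills in only the parts that vary, like a scaffolding generator does.
MODEL_TEMPLATE = Template('''\
class $name:
    table = "$table"

    def __repr__(self):
        return f"<$name table={self.table}>"
''')

def generate_model(name: str, table: str) -> str:
    """Emit boilerplate source for a model class bound to a table."""
    return MODEL_TEMPLATE.substitute(name=name, table=table)

print(generate_model("User", "users"))
```

A Lisp macro goes one step further by doing this expansion inside the language itself, at compile time, instead of emitting source text for you to paste in.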


You start by talking about apples and finish talking about cars.


Could you explain what you mean by that idiom?


A way I like to describe something like this is that code is long-form poetry with two intended audiences: your most literal-minded friends (machines), and the friends who need and want a full story told, with a recognizable beginning, middle, and end (your fellow developers, and yourself in the future). LLMs and boilerplate generators (and linters and so many other tools) are great at the mechanics of the poetry forms: keeping you to the right "meter", checking your "rhymes" for you, formatting the whitespace around the poem, naming conventions. But for the most part they can't tell your poem's story for you. That's the abstract stuff of allegory, metaphor, and simile (data structures, the semantics behind names, the significance of architectural structures, etc.) that is likely to remain the unique skill set of good programmers.


Hard to know without the benefit of hindsight if a productivity improvement is:

1: An ATM - which made banks more profitable, so banks opened more of them, drew people into the branch with the machines, and then sold them insurance and investments.

2: Online banking - which simply obsoleted the need to go to the bank at all.

My inclination is that LLMs are the former, not the latter. I think the process of coding is an impediment to software development being financially viable, not a source of job security.


Well said. Assistive technology is great when it helps developers write the "right" code, and tremendously destructive to company value otherwise.


This is such a well-written, thoughtful, and succinct comment. It's people like you and input like this that make HN such a wonderful place. Had OP (of the comment you responded to) posted this on Twitter or Reddit, they would probably have been flooded with FUD-filled nonsense.

This is what newcomers need. I've been saying something similar to new software engineers over the past couple of years and could never put it the way you did.

Every single sentence is so insightful and to the point. I love it. Thank you so much for this.


I strongly agree with your points and sentiment, so long as machine intelligence remains non-general.

Currently, LLMs summarize, earlier systems classified, and a new system might do some other narrow piece of intelligence. If a system is created that thinks, understands, and is creative with philosophy and ideas, that is going to be different. I don't know if that is tomorrow or 100 years from now, but it is going to be very different.



