
I am a hobbyist playing around. I recently dropped Claude Code (which gave me a sense of awe 2 months ago), but they realized GPUs need CapEx and I want to screw around with pi.dev on a budget. Then I moved on to GH Copilot but couldn't understand their cost structure, ran out of quota half a month in, and now I'm on Codex. I don't really see any difference for little stuff. I also have Antigravity through a personal Gmail account with access to Opus et al., and I don't understand whether I am paying for it or not. They don't have my credit card, so that's a breather.

It's all romantic, but a bunch of devs are getting canned left and right, a slice of the population whose disposable income the economy depends on.

It's too late to be a contrarian pundit, but what's been done besides uncovering some 0-days? The correction will be brutal, worse than the Industrial Revolution. Just look at the recent news: Meta cuts, Salesforce, Snap, Block; the list is long.

Have you shipped anything commercially viable because of AI or are you/we just keeping up?



> The correction will be brutal, worse than the Industrial Revolution.

Has it occurred to you that there might not be a correction, and that the outcome would still be brutal, at least on par with the Industrial Revolution?


It won't get that far.

It's physically impossible to build out the datacenters required for the "AI is actually good and we have mass layoffs" scenario. This Anthropic investment is spurred on because they've already hit a brick wall with capacity.

$40B goes a long way, but not for datacenters where nearly every single component and service is now backordered. Even if you could build the DC, the power connection won't be there.

The current oil crisis just makes all of that even worse.


Doesn't that just draw out the AI revolution by a few years? I don't see why it would stop anything though.

Imagine a scenario where someone claimed that it was physically impossible to replace all the buggies with automobiles because everything was backordered and there were labor shortages. Surely the replacement still happens eventually though?


A long, drawn-out change simply doesn't cause the major societal upset that imminent mass unemployment does.

With how much scale AI datacenters want and how the Trump administration has made supply problems significantly worse, we'd be talking decades, plural.


I don't think lowering the rate a bit is going to be sufficient to avoid major upsets. If (arbitrary example) every software developer were forced to switch jobs over a 10 year period that would still be an extremely disruptive sequence of events. And I don't think there's any scenario in which software developers are widely impacted but other industries somehow aren't.

Digitization was already fairly disruptive and that involved much smaller changes than what we appear to be facing while also taking place over something like 30 years or more.


We pretty much already had the layoffs, at least that's my perception.

The next level of layoffs is probably still 25 years out.


There's layoffs, certainly.

But all the economic indicators suggest those are "bad economy" layoffs dressed up as "AI" layoffs to keep the shareholders happy.


The real “AI layoffs” are all the people who are PIPed because their colleagues are better at leveraging AI.


We must have a very different view of the world because in my neck of the woods companies are desperate for senior talent. And it's become even harder to find seniors now that everyone has access to a machine that can create the appearance of experience.


> The next level of layoffs is probably still 25 years out.

It hasn't even been 25 years since the previous layoffs before the current ones.


Do you mean as in there will be no happy ending / reset and not another century of prosperity?


I mean as in living through the industrial revolution would have been wild. So whether we have an AI revolution or an AI bubble it's bound to be a roller coaster.

And that's without accounting for the various wars (and resultant economic impacts) that are already in progress. A large part of what drove the meat grinder of WWI was (very approximately) the various actors repeatedly misjudging the overall situation and being overly enthusiastic to try out their shiny new weapons systems. If one or more superpowers decide to have a showdown, the only thing that might minimize loss of life this time around is (ironically enough) the rise of autonomous weapons systems. Even in that case, as we know from WWII, the logical outcome is a decimated economy and manufacturing sector regardless of anything else that might happen.


> minimize loss of life this time around is (ironically enough) the rise of autonomous weapons systems

I think that just means the relative civilian loss of life will increase once again.


What strategic merit is there in targeting civilians or life critical infrastructure in a fully automated battlebot scenario? Perhaps it's naive but I would expect stockpiles, datacenters, and any key infrastructure on which the local semiconductor fabrication depends to be the primary targets.


Look at Ukraine for answers: the Russians target almost purely civilian infrastructure and civilians in terror campaigns every single day and night, the same as the Nazis did to Britain in WWII. With exactly the same results, but they just double down and send more drones the next day.

Russia is really an empire of the dumb and subjugated serfs at this point (again, history repeats), but it is far from the only such place.

Don't expect more; most people are not that nice when the SHTF.


The current reality doesn't match your expectations. Russia is using automated warfare to strike what are primarily human life-critical targets.


The aim of war is to effect political change and gain control of the opponent. It is much more valuable to capture datacenters, infrastructure, and semiconductor fabrication than to destroy and rebuild them.


Bubble or revolution - not a dichotomy.

Bubbles like the AI bubble are a game-theoretic outcome of a revolution. Many players invest heavily to avoid losing, but as a whole the market overinvests. This leads to a bubble.
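To make that logic concrete, here is a minimal toy sketch of the argument as a two-firm investment game in Python. The payoff numbers are invented purely for illustration (not drawn from any market data); what matters is the structure, which is a standard prisoner's-dilemma shape:

    # Toy payoff matrix for two firms deciding whether to overinvest in AI.
    # All numbers are hypothetical; they only illustrate the structure:
    # investing is the dominant strategy for each firm individually,
    # yet both end up worse off than if neither had invested heavily.
    payoffs = {
        # (a_move, b_move): (a_payoff, b_payoff)
        ("invest", "invest"): (1, 1),  # everyone overbuilds: bubble
        ("invest", "hold"): (5, 0),    # the investor captures the market
        ("hold", "invest"): (0, 5),
        ("hold", "hold"): (3, 3),      # collectively best, but unstable
    }

    for b_move in ("invest", "hold"):
        a_invest = payoffs[("invest", b_move)][0]
        a_hold = payoffs[("hold", b_move)][0]
        print(f"if B plays {b_move}: A gets {a_invest} investing, {a_hold} holding")
    # Whatever B does, A does better by investing; by symmetry so does B.
    # Both invest and land on (1, 1) instead of (3, 3): the market overinvests.

Individually rational, collectively an overbuilt market; that's the bubble mechanics in miniature.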


Imagine you're a typesetter and they just invented computerized printing.


These kinds of comments are so confusing to me. Is the work you do day to day really so trivial that you can be wholesale replaced by an LLM?


90% of the actual code writing? Yes. The actually valuable part is coming up with the ideas for what to do.

There isn't going to be a great reset where everyone goes back to coding by hand any more than we're going back to typesetting by hand.


We're not talking about the LLMs of today but whatever shows up 2 years from now and then again 2 years after that. Don't look at the present state of things but instead project the trend line.


There has always been a gap between the experience of solo/small-shop developers and developers who work in teams in a large corporate environment. But thanks to open source, for the past twenty years at least, we have mostly all been using the same tools.

But right now, the difference in developer experience is vast between a dev on a team at a business with corporate Copilot or Claude licenses and bosses encouraging them to maximize token usage, and a solo dev experimenting once every few months with a consumer-grade chat model.


Let’s take an extreme example.

Meta seemingly has a constant stream of product managers. If LLMs really augment the productivity of engineers, why isn’t Meta launching lots more stuff? I mean, there’s no harm in at least launching one new thing.

What are all those people doing with the so called productivity enhancements?

What I’m calling into question is how much generating more code matters if the bottleneck is creativity/imagination for projects.

The only thing I’ve seen is a really crummy Meta AI thing implemented within WhatsApp.


It’s allowed a sludge of internal tools to spin up, and more bloat. The ability to sandbag and overbuild these tools has gotten 2-10x worse.

The only solution I can think of is to drastically cut headcount so productivity is back to prior levels and profitability is raised. Big Tech is mostly market-constrained, with not much room to grow beyond the market itself growing.

As for startups, seems like AI tools have drastically reduced their time to market and accelerated their growth curves.


I'm convinced the scarcest skill on the planet is the ability to a) envision something that needs to exist in the world and b) explain how that thing creates value from a financial perspective.

Most people tend to think they know what they are talking about (e.g. a surface-level understanding of how to think economically) and end up making basket-case decisions, only realising it months later. By that point they will fail to admit defeat and keep going.

"As for startups, seems like AI tools have drastically reduced their time to market and accelerated their growth curves."

You mean like openclaw? lol


In a word, bottlenecks moved.

What I see in my backyard: coding now takes significantly less time, but it's just coding. Before one gets to building, there are squabbles between business and product people. Testing takes just as long as it used to. Since nice-to-haves are easy to add, product people begin to take them for granted, so the product cycles don't get shorter.

Give it time. Right now it's just coding, but procedural AI will come after product development, architecture, and then whatever is left of management.


Absolute delusion.

The best people can not only envision products but also possess great judgement without needing data. For AI to even come close, it would need an insane amount of nuanced and subtle data, and by the time the AI has obtained all the necessary data and made sense of it, the human is long gone, working on something else.


But these people will age out and juniors do not get hired. “Good judgment comes from experience, and experience comes from bad judgment.” and all that.

Is an LLM going to invent its own languages that no average programmer will understand? As in: "I don't need your C++, human, I will rewrite your fart app in ClaudASM and you will like it." These are naive questions, but I can't visualize how all of this will unfold.


Forgive my ignorance, but what exactly is the vast difference? Who's doing more of what, or whatever you're implying? And how do you quantify this?


The people who use AI to the maximum learn more.

A neutral hobbyist on a $20 budget will build something and immediately bump into quotas. It's not going to be an enjoyable experience.

A negatively predisposed pro who only dabbles in AI gets to the first disappointment, smiles, and thinks "yeah, about what i expected" and quits.

To learn these new tools one needs to not be stingy. Invest as much as needed into tokens and subscriptions, and, maybe most importantly, invest the time. Spend time building various things. Try out various models, not just for coding but as part of the apps being built. For bonus points, meaningfully experiment with local models. I try to avoid discussions with sceptics who have not put at least a few months of effort into learning these tools. It's like discussing driving with my mother-in-law, who has spent maybe 20 hours behind the wheel in her whole life (and is very, very opinionated!).


And it's not a waste of money because?


You'd have learned something new. Useful or not, a thorough understanding of a new thing is rewarding.

Also, it's not primarily about money; the real investment here is time.


In my opinion it's a complete waste of time and money to learn something that is gated by a company that might disappear tomorrow.

It's akin to company courses that teach something specific to that company. Of course you do them on the job; there is no point in doing them if you don't work there.

Similarly, what's the point of trying 300 different models if any job will decide for you which one they approve the use of, and you are liable to get fired and asked for damages if you let anything else access company intellectual property?


The difference is (if you'll forgive me recruiting a couple of straw men for the purpose of illustrating the spectrum we are talking about here):

Hobbyist solo dev, counting tokens, hitting quotas, trying things on little projects, giving up and not seeing what the fuss is about.

vs

Corporate developer, increasingly held accountable by their boss for hitting metrics for token usage; being handed every new model as soon as it comes out; working with the tools every day on code changes that impact other developers on other teams all of whom have access to those same tools.


Okay, so just to be clear you're not commenting on productivity? Or what does "changes that impact" mean?

I might be missing a lot of self-evident assumptions here but I feel like I'm still missing so much context and have no idea what this difference is actually describing.


If you have some objective measure of productivity in mind, feel free to share it, but no that's not what I'm commenting on.

I'm talking more about why threads like this seem to be full of people saying 'this has completely changed how corporate development works' and other people saying 'I tried it a few times and I don't get the hype'


Developers being let go is about the economy. Every time we see a slowdown, people are let go, and we always blame whatever the current fad is, but it's the economy.



