Hacker News
A new threat: Being replaced by someone who knows AI (wsj.com)
41 points by zerosizedweasle 1 day ago | hide | past | favorite | 44 comments




You didn't have to punish athletes to make them wear Nike and Adidas shoes, because they were obviously better than plain sneakers. You didn't have to punish graphic artists to make them use tablets, because they are so convenient for digital art. But a lot of bosses are convinced that if their staff don't find these tools useful for their tasks, it's the line workers who are wrong.

People wouldn't keep using old shoes, and I am old enough to remember graphic artists who wouldn't use computers. It takes time. At some point, it will be a no-brainer. Yet, it will not be simply because method A is so much better than method B. It will be because people using method B change, retire, or are fired.

Sure. There are, however, probably also plenty of examples where the opposite is true (people being hesitant to use newer, better technologies): not everyone wanting to use computers early on ("the old lady in accounting" etc.), people not trusting new medications, people being slow to adopt tractors, people being afraid of electricity (yes!), and so on. Change is hard, and people generally don't really want to change. It's even harder if you fear (as roughly 25% of people do, depending on where you are in the world) that AI can take your job (or a large part of it) in the future.

I use AI and it makes me a lot more productive. I have coworkers who don’t use AI and are still productive and valued. I also have coworkers who use AI and are useless. Using AI adoption as a criterion for layoffs seems dumb, unless you have no other way to measure productivity.

AI also helps most with low-value tasks. The really valuable problems are the ones that can’t be solved easily, and AI is usually much less help with those (e.g., system design, kernel optimisation, making business decisions). I’ve seen many people say how AI helps them complete more low-value tasks in less time, which is great but not as meaningful as the other work that AI is not that good at yet.

You have to get quite sophisticated to use AI for most higher-value tasks, and the ROI is much less clear than for just helping you write boilerplate. For example, using AI to help optimise GPU kernels by having it try lots of options autonomously is interesting to me, but not trivial to actually implement. Copilot is not gonna cut it.


If something is really clearly better, people come around. Some people never will but their children and apprentices adopt the new ways. A whole community of practice experimenting is very powerful. Everyone does not move at once, but people on this site know how often the cool new thing turns out to be a time bomb.

On the other hand, if you have ever worked in a corporate environment, you may have noticed that some people absolutely refuse to learn how to use Excel; even simple column filters are beyond the capacity of most Excel users.

For some reason, big companies often tolerate people being horribly inefficient doing their job. Maybe it is starting to change?


If people found this useful for putting out "good" work instead of slop, they would use it. I promise you that it's the employees who are right: the output is the same AI slop we see everywhere. If you want to turn your company into an AI slop farm, that is questionable logic.

The more I interact with these, the less I’m afraid these tools will make life meaningless. (Can’t speak on art generation tools. Those still depress me.) It doesn’t matter what you’re making; there are still a lot of hard parts even with the best versions of these tools. I doubt a good software developer can be replaced totally unless these get way better.

The best use cases are for code that’s clearly not an end product. You can just try way more ideas and get a sense of which are likely to pan out. That is tremendously valuable. When I start reading the code they produce, I quickly find many ways I would have written it differently though.


Ultimatum? Fire away. Don't threaten me with a better time.

Haven't RTFA (paywall) but an anecdote:

I know a startup founder whose company is going through a bit of a struggle - they hired too many engineers, they haven't gotten product-market fit yet, and they are down to <1 year of runway.

The founder needed to do a layoff (which sucks in every dimension) and made the decision to go all-in on AI-assisted coding. He basically said "if you're not willing to go along, we're going to have to let you go." Many engineers refused and left, and the ones that stayed are committed to giving it a shot with Claude, Codex, etc.

Their runway is now doubled (2 years), they've got a smaller team, and they're going to see if they can throw enough experiments at the wall over the next 18 months to find product-market fit.

If they fail, it's going to be another "bad CEO thought AI could fix his company's problems" story.

But if they succeed....

(Curious what you all would have done in this situation btw...!)


For the people who refused, why?

Not meaning to sound accusatory, just asking. Was it the tools provided that they didn’t like? Ideological reasons not to use AI? Was the CEO being too prescriptive with their day to day?

I guess I find it hard to imagine why someone would dig in so much on this issue that they’d leave a job because of it, but 1) I don’t know the specifics of that situation and 2) I like using AI tooling at work for stuff.


You ask a great question. My sense is that the engineers fell into three camps (as they do here on HN as well):

1) I don’t really like these AI tools; I write better code anyway, and they just slow me down.

2) I like these tools; they make me 10% faster, but they’re more like spell check / autocomplete for me than life-changing. I don’t want to go all in on agentic coding, etc., and I still want to hand-write everything. And:

3) I am no longer writing code; I am using AI tools (often in parallel) to write code, and I am acting like an engineering manager / PM instead of an IC.

For better or for worse, and there is much to debate about this, I think he wanted just the (3) folks and a handful of (2) folks to try to salvage things; otherwise it wasn’t worth the burn :(


Personally, I might choose to leave too. I just don't feel like taking responsibility for something iterated with AI, something I will take the blame for when it goes wrong.

Especially so after I saw someone trying to use AI even though I had provided simple, clear manual steps. They ended up attempting something different that didn't fit the scenario at all, and the AI never recognized that the solution wouldn't even have worked.


It would be easier to use AI at work if it would work.

I have a prompt which opens scans of checks placed on their matching invoices (EDIT: note that the account line is covered when the scan is made, so as to preclude any Personal Identifying Information being in the scan) and writes a one-line move command to rename each file to include the amount and date of the check, the invoice ID#, and various other information. That lets me track that the check was entered/deposited. I then copy a folder full of files as their file paths, paste that text into Notepad, find-and-replace to convert the filenames into tab-separated text, and paste it into Excel to total up against the adding machine tape (and to check overall deposits).

On Monday, it worked to drag multiple files into Co-Pilot and run the prompt. On Tuesday, Co-Pilot was updated so that processing multiple files became the bailiwick of "Co-Pilot Pages Mode": after launching, it's necessary to get into that mode with a prompt and then a button press, and only 20 files at a time can be processed. Even though the prompt removes the files after processing, it only allows running a couple of batches, so for reliability I've found it necessary to quit after each batch and restart. However, that only works five or six times; after that, Co-Pilot stops allowing files to upload and generates an error when one tries, until it resets the next day and a few more can be processed.

I've been trying various LLM front-ends, but Jan.ai only has this on their roadmap for v0.8, and the other two I tried didn't pan out --- anyone have an LLM which will work for processing multiple files?
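For what it's worth, the filenames-to-Excel half of that workflow (the Notepad find-and-replace step) can be scripted directly. Here's a minimal sketch in Python, assuming the scans have already been renamed to a structured pattern like `125.00_2024-01-08_INV-1042.pdf` (amount_date_invoiceID); that exact naming scheme is my assumption, not necessarily the one the prompt produces.

```python
# Sketch: turn structured scan filenames into tab-separated text for Excel.
# Assumed filename pattern: "<amount>_<date>_<invoiceID>.pdf", e.g.
# "125.00_2024-01-08_INV-1042.pdf" (hypothetical -- adjust to your scheme).
import csv
import io
from pathlib import Path

def filenames_to_tsv(folder: str) -> str:
    """Return tab-separated rows (one per scan) plus a TOTAL row,
    ready to paste into Excel and check against the adding machine tape."""
    out = io.StringIO()
    writer = csv.writer(out, delimiter="\t", lineterminator="\n")
    writer.writerow(["amount", "date", "invoice_id"])
    total = 0.0
    for path in sorted(Path(folder).glob("*.pdf")):
        # Split into at most three fields so invoice IDs may contain "_".
        amount, date, invoice_id = path.stem.split("_", 2)
        writer.writerow([amount, date, invoice_id])
        total += float(amount)
    writer.writerow([f"{total:.2f}", "TOTAL", ""])
    return out.getvalue()
```

That replaces the Notepad round-trip, though it obviously doesn't help with the harder part (getting the LLM to read the checks reliably in the first place).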


You're sending people's bank account numbers to Microsoft?

No, the checks, when scanned, have a pen placed over the account line so that there is no personal identifying information (I should have mentioned that).

This wouldn't be so absurd if AI worked in even trivial cases.

Can you be more specific? What are the trivial cases you’re talking about? AI just doesn’t work? Coding agents are not saving anyone any time?

don’t bother with these questions, same people will say excel can’t get anything done and it sucks :) people that know and (more importantly) take time to learn are doing amazing sh*t with it

It's not that it doesn't have some use cases that "work"; it's that a lot of the output is at "AI slop quality". It's more work to turn it into something good than to start from scratch. Look at all those lawyers and judges submitting filings with laughable citations to non-existent cases.

Sure but OP said that it doesn't even work in trivial cases.

Most of the anti-AI people have conceded it sometimes works but they still say it is unreliable or has other problems (copyright etc). However there are still a few that say it doesn't work at all.


If something isn't reliable, I don't think it works at all. I'm trying to work, not play a slot machine.

Are all the tools you use 100% reliable?

Cause I use things like computers, applications, search engines and websites that regularly return the wrong result or fail


I’m not really sure how you envision AI use at your job but AI can be the extremely imperfect tool it is now and also be extremely useful. What part of AI use to you feels like a slot machine?

damn! with this attitude I’d be left using an abacus…

It's just totally different from my own personal experience, which leads me to believe people are lamenting poor usage of AI tools, which is very understandable.

But nuanced and effective AI use, even today with current models, is incredible for productivity in my experience


I expect it makes a big difference what kind of work one does. For me, working with a legacy codebase for firmware, with 1000s of lines of C in each module, AI is very slow (~5-10s response time) and almost none of the code is acceptable.

I do however find it useful for getting an overview of dense chunks of confusing code.


I’ve been hacking since the ‘90s; it is the most remarkable productivity boost we’ve ever had. I feel awful for people who don’t take the time to learn…

IntelliJ guesses the functions I want to write plenty. I don't think it's useful to try to use AI for complex or nuanced needs (although it gets close in middling cases). I think it's useful enough.

Disagree. Coding: I've heard enough bad things that I'm not interested in trying it. However, I recently ran into a use case where it's good: drawing illustrations for articles. Thus you can't say it never works.

It depends on the coding domain, but you're basically saying you've never tried it but you're certain Mitchell Hashimoto, Salvatore Sanfilippo, Armin Ronacher or Simon Willison, all supremely accomplished coders, must be misguided when they explain how it's made even them more productive.

They've totally bought into the most extreme AI hype if this is happening. Altman convinced them AI is a PhD in your pocket and their lazy employees are costing them money by not using it.

I wish. Where I’m at, we had to agree not to use it without “disclosure” (not even sure what that means). But we also agree to do code reviews, and since we would review the code regardless of how it was written, I don’t know what the concern is. Notably, there was never anything written about not using code generation tools, which have existed for many decades. Anyway, I just use AI regardless, but it would of course be better if work would fund it!

Accenture of course can make more money by first delivering vibe slop and then have a second round of contracts that fix the slop. Customers beware.

Consulting firms are going to be hard hit by AI. [1] It will make them more efficient, but so many potential clients will be able to gather the needed data, crunch the numbers, and write up analyses by themselves. And if they do still use outside consultants, they'll expect the prices to go down since they know that an army of junior consultants won't be necessary to build all the models.

1: https://www.ft.com/content/68011c4a-8add-4ac5-b30b-b4127aee4...


A new threat: your company taking a huge shit because of AI

Kinda wonder what the extent of this is. You can get some really great results from your employees by mandating shit like this. /s

I use AI daily and frankly I love it while thinking of it from the context of "I write some rough instructions and it can autocomplete an idea for me to an extremely great degree". AI literally types faster than me and is my new typewriter.

However, if I had to use it for every little thing, I'd do it. The problem though is when it reaches a point where I have to use it to replace critical thinking for something I really don't know yet.

The problem here is that these LLMs can and will churn out absolute trash. If this was done under mandate, the only thing I'd be able to respond with when that trash is being questioned is "the AI did it" and "idk, I was using AI like I was told".

It literally falls into the "above my pay-grade" category when it comes down as a mandate.

I really hope there's more nuance to articles like these though. I really hope these companies mandating AI use are doing so in a way that considers the limitations.

This article does not really clue me, the reader, in as to whether that is the case, though.


What’s the controversy, unless people are straw manning or pulling from some bad personal experience?

If you are not leveraging the best existing tools for your job (and understanding their limitations) then your output will be lower than it should be and company leadership should care about that.

Claude reduces my delivery time at my job like 50%, not to mention things that get done that would never have been attempted before. LLMs do an excellent job seeding literature reviews and summarizing papers. Would be a pretty bad move for someone in my position to not use AI, and would be pretty unreasonable of leadership not to recognize this.


Crazy idea: Evaluate me based on my output and not which tools I use. If AI is the killer productivity boost you claim, then I'll have no choice in order to keep up.

I think that’s perfectly fair.

However, if you were leadership in this scenario, and you see that people using various AI tools are systematically more productive than the people who aren’t, what would you do?


Ask questions instead of making demands. Presumably you hired your engineers because they're smart. If you hired dumb engineers then you have a much bigger problem than a lack of AI utilization.

at what point do you actually not know anything?

What do you mean?


