
AI is bad at figuring out what to do, but fantastic at actually doing it.

I’ve totally transformed how I write code, from writing it myself to writing detailed instructions and having the AI do it.

It’s so much faster and less cognitively demanding. It frees me up to focus on the business logic or the next change I want to make. Or to go grab a coffee.



I would say, from my experience, there's high variability in AI's ability to actually write code unless you're just writing a lot of scripts and basic UI components.


The AI version of that Kent Beck mantra ("for each desired change, make the change easy (warning: this may be hard), then make the easy change") is probably "Make the change tedious but trivial (warning: this may be hard). Then make the AI do the tedious and trivial change."

AI's advantage is that it has infinite stamina, so if you can make your hard problem a marathon of easy problems, it becomes doable.


I would say this does not work in any nontrivial way from what I've seen.

Even basic scripts and UI components are fucked up all the time.


It can still fuck it up. And you need to actually read the code. But still a time saver for certain trivial tasks. Like if I'm going to scrape a web page as a cron job I can pretty much just tell it here's the URL, here's the XPath for the elements I want, and it'll take it from there. Read over the few dozen lines of code, run it and we're done in a few minutes.
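
For concreteness, here's roughly what that kind of script can look like. A minimal sketch in TypeScript, assuming Playwright is installed; the URL and XPath are hypothetical placeholders:

  // Hypothetical one-shot scraper: fetch a page and print the text of
  // every node matching an XPath. Suitable for running from cron.
  import { chromium } from "playwright";

  const URL = "https://example.com/listings";   // hypothetical target
  const XPATH = "//div[@class='listing']//h2";  // hypothetical element path

  async function scrape(): Promise<void> {
    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto(URL);

    // Playwright locators accept raw XPath via the xpath= prefix.
    const texts = await page.locator(`xpath=${XPATH}`).allTextContents();
    console.log(texts.join("\n"));

    await browser.close();
  }

  scrape().catch((err) => { console.error(err); process.exit(1); });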


You have to learn how and where to use it. If you give it bad instructions and inadequate context, it will do a bad job.


This is the ‘you’re holding it wrong’ of LLMs.


What tool can’t you hold wrong?

Literally every tool worth using in software engineering from the IDE to the debugger to the profiler takes practice and skill to use correctly.

Don’t confuse AI with AGI. Treat it like the tool it is.


You might want to ask ChatGPT what that is referencing. Specifically, Steve Jobs telling everyone their bad reception was their own fault, after Apple put the antenna right where people hold their phones.

The issue is really that LLMs are impossible to deterministically control, and no one has any real advice on how to reliably get what you want from them.


I recognized the reference. I just don’t think it applies here.

The iPhone antenna issue was a design flaw. It’s not reasonable to tell people to hold a phone in a certain way. Most phones are built without a similar flaw.

LLMs are of course nondeterministic. That doesn’t mean they can’t be useful tools. And there isn’t a clear solution similar to how there was a clear solution to the iPhone problem.


"Antennagate" is the gift that keeps on giving. Great litmus test for pundits and online peanut galleries.

You are technically correct: it was a design flaw.

But folks usually incorrectly trot it out as an example of a manufacturer who arrogantly blamed users for a major product flaw.

The reality is that essentially nobody experienced this issue in real life. The problem disappeared as long as you used a cell phone case, which is how 99.99% of people use their phones. To experience the issue in real life you had to use the phone "naked", hold it a certain way, and have slightly spotty reception to begin with.

So when people incorrectly trot this one out I can just ignore the rest of what they're saying...


Humans are not deterministic.

Ironically, AI models are, iirc: sample at temperature zero and the same prompt yields the same output.

Come up with a real argument.


YOU come up with a real argument first

"no, you're wrong" is not a valid rebuttal lolol


I pointed out the claim is both irrelevant and factually wrong.

That should be sufficient rebuttal.


Exactly. The problems start when people say it's good for everything ;)

"All the time"?

This always feels like you're just holding it wrong and blaming the tool.


> AI is bad at figuring out what to do, but fantastic at actually doing it.

AI is so smart, one day it might even figure out how to subtract... https://news.ycombinator.com/item?id=45821635


When you need to take the square root of 37282613, do you do it in your head or pull out the calculator?

Why does the AI have to be good at math when it can just use a calculator? AI tool usage is getting better all the time.
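
That's roughly how tool use works: the model emits a structured call, and the host executes it. A minimal sketch in TypeScript, assuming an OpenAI-style function-calling schema; the tool name and shape are illustrative, not any particular vendor's API:

  // Illustrative tool definition the model can invoke instead of doing
  // the arithmetic itself.
  const calculatorTool = {
    type: "function",
    function: {
      name: "sqrt",
      description: "Return the square root of a non-negative number.",
      parameters: {
        type: "object",
        properties: {
          x: { type: "number", description: "Number to take the root of." },
        },
        required: ["x"],
      },
    },
  };

  // The host runs whatever call the model emits; the model never has to
  // "know" arithmetic, only when to ask for it.
  function runSqrt(args: { x: number }): number {
    return Math.sqrt(args.x); // runSqrt({ x: 37282613 }) ≈ 6105.95
  }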


I think people generally think AI should be good at math because it runs on a very complex and very fast calculator to begin with.


Your brain runs on physics and biology, yet here we are…


That is not the point... It's about not understanding subtraction...


> AI is bad at figuring out what to do, but fantastic at actually doing it.

I've found AI is pretty good at figuring out what to do, but hit or miss at actually doing it.


I think it is like protein folding.

It will make a mess, and it can spend 3 hours failing to help you understand and debug a problem; but if you then drop a console.log into the browser debug console to show the AI what it should be looking for, it will do a week of work in 2 hours.
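
The probe itself can be tiny. Something like this, pasted into the browser console (the selector is a made-up example):

  // One-off probe so the AI sees actual runtime state instead of guessing.
  const el = document.querySelector("#checkout-form"); // hypothetical selector
  console.log("node:", el, "display:", el && getComputedStyle(el).display);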


I've noticed this too. The latest Cursor version has a @browser command, which launches a browser with Playwright and calls tools to inspect the HTML and inject JavaScript to debug in real time.

When it has access to the right tools, it does a decent job, especially for fixing CSS issues.
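
You can hand-roll the same loop without Cursor. A minimal sketch using Playwright's Node API in TypeScript; the dev-server URL and the overflow probe are hypothetical examples:

  // Load a page, pull the rendered HTML, and inject a JS probe, so the
  // AI debugs against real artifacts instead of guessing.
  import { chromium } from "playwright";

  async function inspect(): Promise<void> {
    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto("http://localhost:3000"); // hypothetical dev server

    const html = await page.content(); // the DOM as actually rendered

    // Example probe: find elements overflowing their box, a common
    // culprit behind broken CSS layouts.
    const overflowing = await page.evaluate(() =>
      Array.from(document.querySelectorAll("*"))
        .filter((el) => el.scrollWidth > el.clientWidth)
        .map((el) => el.tagName)
    );

    console.log(html.length, overflowing);
    await browser.close();
  }

  inspect().catch(console.error);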

But when it can't see the artifacts it's debugging, it starts guessing, confident that it knows the root cause.

A recent example: I was building up an HTML element in the DOM and exporting it to PNG using html2canvas. The element was being rendered correctly in the DOM, but the exported image was incorrect, and it spent 2 hours spinning its wheels and repeating the same fixes over and over.


We're all architects now...


Username checks out.


Couldn't help yourself, could you?


Lay off him, he did the right thing



