Hacker News

I would say from my experience there's high variability in AI's ability to actually write code, unless you're just writing a lot of scripts and basic UI components.




The AI version of that Kent Beck mantra is probably "Make the change tedious but trivial (warning: this may be hard). Then make the AI do the tedious and trivial change."

AI's advantage is that it has infinite stamina, so if you can make your hard problem a marathon of easy problems, it becomes doable.


I would say this does not work in any nontrivial way from what I've seen.

Even basic scripts and UI components are fucked up all the time.


It can still fuck it up. And you need to actually read the code. But still a time saver for certain trivial tasks. Like if I'm going to scrape a web page as a cron job I can pretty much just tell it here's the URL, here's the XPath for the elements I want, and it'll take it from there. Read over the few dozen lines of code, run it and we're done in a few minutes.

You have to learn how and where to use it. If you give it bad instructions and inadequate context, it will do a bad job.

This is the ‘you’re holding it wrong’ of LLMs.

What tool can’t you hold wrong?

Literally every tool worth using in software engineering from the IDE to the debugger to the profiler takes practice and skill to use correctly.

Don’t confuse AI with AGI. Treat it like the tool it is.


You might want to ask ChatGPT what that is referencing. Specifically, Steve Jobs telling everyone it was their fault that Apple put the antenna right where people hold their phones and it was their fault they had bad reception.

The issue is really that LLMs are impossible to deterministically control, and no one has any real advice on how to deterministically get what you want from them.


I recognized the reference. I just don’t think it applies here.

The iPhone antenna issue was a design flaw. It’s not reasonable to tell people to hold a phone in a certain way. Most phones are built without a similar flaw.

LLMs are of course nondeterministic. That doesn't mean they can't be useful tools. And there isn't a clear fix for that, the way there was a clear fix for the iPhone problem.


"Antennagate" is the gift that keeps on giving. Great litmus test for pundits and online peanut galleries.

You are technically correct: it was a design flaw.

But folks usually incorrectly trot it out as an example of a manufacturer who arrogantly blamed users for a major product flaw.

The reality is that essentially nobody experienced this issue in real life. The problem disappeared as long as you used a cell phone case, which is how 99.99% of people use their phones. To experience the issue in real life you had to use the phone "naked", hold it a certain way, and have slightly spotty reception to begin with.

So when people incorrectly trot this one out I can just ignore the rest of what they're saying...


Humans are not deterministic.

Ironically, AI models are, iirc.

Come up with a real argument.
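The determinism claim can be illustrated with a toy sampler: greedy decoding (temperature 0) always picks the argmax, and even temperature sampling is reproducible given a fixed seed. This is a simplified sketch, not how production inference stacks behave; real GPU serving can still be nondeterministic due to kernel scheduling and batching.

```python
import math
import random

def sample(logits, temperature=1.0, seed=None):
    """Pick a token index from raw logits; temperature 0 means greedy argmax."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = random.Random(seed)  # fixed seed -> reproducible draws
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(logits) - 1

logits = [1.0, 3.0, 2.0]
print(sample(logits, temperature=0))  # 1 (always the argmax)
# Same seed, same result:
print(sample(logits, 0.8, seed=42) == sample(logits, 0.8, seed=42))  # True
```

The practical caveat is that most hosted LLM APIs sample at temperature > 0 without exposing a seed, which is why they feel nondeterministic in use.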


YOU come up with a real argument first

"no, you're wrong" is not a valid rebuttal lolol


I pointed out the claim is both irrelevant and factually wrong.

That should be sufficient rebuttal.


Exactly. The problems start when people say it's good for everything ;)

"All the time"?

This always feels like you're just holding it wrong and blaming the tool.



