Hacker News

ChatGPT has zones of competence and your opinion of ChatGPT is likely to be a function of whether or not its competence zones overlap with what you are doing.

Early on, ChatGPT knocked a bunch of highly technical questions I sent it out of the park. It trivially reproduced insight from hours of diving through old forums and gave me further info that eventually proved out. More recently, it has completely hallucinated responses to the last 3 technical questions I gave it in its familiar "invent an API that would be convenient" style. It's the same ChatGPT, but two very different experiences as a function of subject.



> Early on, ChatGPT knocked a bunch of highly technical questions I sent it out of the park

I hear this all the time but never with a transcript. I wonder how much experts "read into" responses to make them seem more impressive. (Accidentally, you understand; no malice.) Or whether in the moment it feels impressive but on review it's mostly banal platitudes and vagueness.

The few times I’ve used it for precise answers they were wrong in subtle but significant ways.


Ask it about cleaning a toilet, and then go deep on bacteria, fungal growth, etc. For areas in which you have no expertise but do have a basic understanding, it will grow that knowledge tree.

My apologies if you are an expert toilet cleaner; the point is that it's more useful than a how-to wiki or YouTube for getting you up and running, or for refreshing info you may have forgotten.

Avoid asking it about VERY obscure things you know nothing about, because it probably doesn't know either, but it won't say so, and the hallucinations start.

The falsified stuff can be pretty awful, and it has a tendency to double down.




