Hacker News

Have you ever tried to solve a difficult technical problem with ChatGPT (any version)? I have. It worked about as well as Google search. Which is to say, I had to keep refining my ask after each failed answer until it ultimately told me my task was impossible. Which isn't strictly true, since I could contribute changes to the libraries that weren't able to solve my problem.

Funnily enough, the answers GPT-4 gave were basically taken wholesale from the first Stack Overflow result on Google each time. It's like the return of the I'm Feeling Lucky button.



There are many developers who are unable to do that and need to be spoon-fed. This is the market for ChatGPT and that's why they heavily promote it.

I doubt, though, that the corporations employing these developers will gain any advantage. On the contrary, their code bases will suffer and their secrets will leak.


ChatGPT is a godsend since Google search turned to shit.

Ask a question about C++ or C and Google search will serve up five or six ad-ridden cesspools before giving a link to cppreference.


This is very much my experience too. Occasionally ChatGPT can give me something quickly that I wasn't able to find quickly (because I was looking in the wrong place, likely). But most of the time, it's just a more interactive (and excessively verbose) web search. In fact, search tends to be more optimized for code problems; I can scan results and SO answers much faster than I can scan a generated LLM answer.


Use the edit button if you get an incorrect answer. The whole conversation is fed back in as part of the prompt, so leaving incorrect text in there will influence all future answers. Editing creates a new branch, letting you refine your prompt without using up valuable context space on garbage.
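To make the point concrete, here's a minimal sketch of the mechanism being described (not any vendor's actual implementation; the function and field names are illustrative): the full message history is what the model sees, so an edit is best modeled as truncating at a turn and branching, rather than appending a correction on top of a bad answer.

```python
# Illustrative model of chat context and edit-branching.
# Assumption: the provider concatenates the whole message
# history into the prompt on every turn.

def build_prompt(messages):
    """Everything in the history feeds forward into the next answer."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

def edit_message(messages, index, new_content):
    """Branch the conversation: keep turns before `index`,
    swap in the revised turn, and drop everything after it,
    so the bad answer never re-enters the context."""
    branch = [dict(m) for m in messages[:index]]
    branch.append({"role": "user", "content": new_content})
    return branch

history = [
    {"role": "user", "content": "How do I do X in framework Y?"},
    {"role": "assistant", "content": "(wrong answer)"},
    {"role": "user", "content": "That didn't work."},
]

# Instead of piling corrections on top, branch at the original question:
branch = edit_message(history, 0, "How do I do X in framework Y, without Z?")
```

The payoff is the last line: `build_prompt(branch)` no longer contains the wrong answer at all, whereas appending "that didn't work" keeps it in context forever.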


That doesn't change anything in terms of the flow. It's still refining the input over and over, which is exactly how searching on Google works as well.

For the case I last tested, there was no correct answer. I asked it to do something that is not currently possible within the programming framework I asked it to use. Many people had tried to solve the problem, so ChatGPT followed the same path, since that's what was in its data set, and provided solutions that did not actually solve the problem. There wasn't any problem with the prompts; it's the answers that were incorrect. Having those initial prompts influence the results was desired (and usually is, imo).



