I don't think you understand the value proposition of ChatGPT today.
For context, I'm an expert too. And I had the same experience as you. When I asked it questions about my area of expertise, it gave me a lot of vague, mutually contradictory, nonsensical and wrong answers.
The way I see it, ChatGPT is currently a B+ student at basically everything. It has broad knowledge of everything, but it's missing deep knowledge.
There are two aspects of that to think about. First, it's only a B+ student. It's not an expert. It doesn't know as much about family law as a family lawyer. It doesn't know as much about cardiology as a cardiologist. It doesn't know as much about the Rust borrow checker as I do.
So LLMs can't (yet) replace senior engineers, specialist doctors, lawyers or 5 star chefs. When I get sick, I go to the doctor.
But it's also a B+ student at everything. It doesn't have depth, but it has more breadth of knowledge than any human who has ever lived. It knows more about cooking than I do. I asked it how to make crepes and the recipe it gave me was fantastic. It knows more about Australian tax law than I do. It knows more about the American Civil War than I do. It knows better than I do what kind of motor oil to buy for my car. Or the norms and taboos in posh British society.
For this kind of thing, I don't need an expert. And lots of questions I have in life - maybe most questions - are like that!
I brainstormed some software design with ChatGPT voice mode the other day. I didn't need it to be an expert. I needed it to understand what I was saying, offer alternatives, and make suggestions. It did great at that. The expert (me) was already in the room. But I don't have encyclopedic knowledge of every single popular library in Cargo. ChatGPT can provide that. After talking for a while, I asked it to write example code using some popular Rust crates to solve the problem we'd been talking about. I didn't use any of its code directly, but it saved me a massive amount of time getting started with my project.
You're right in a way. If you're thinking of ChatGPT as an all-knowing expert, it certainly won't deliver that (at least not today). But the mistake is thinking it's useless as a result of its lack of expertise. There are thousands and thousands of tasks where "broad knowledge, available in your pocket" is already valuable.
If you can't think of ways to take advantage of what it already delivers, well, pity for you.
But just now I hit a fairly frequent failure mode: I asked it a question and it gave me a super detailed and complicated solution that a) didn’t work, and b) required serious refactoring and rewriting.
I went to Google, found a Stack Overflow answer, and it turns out I needed to change a single line of code, which had been my suspicion all along.
Claude was the same, confidently telling me to rewrite a huge chunk of code when a single line was all that was needed.
In general, Claude wants you to write a ton of unnecessary code; ChatGPT isn’t as bad, but neither writes great code.
The moral of the story is that I knew the GPT/Claude solutions didn’t smell right, which is why I tried Google. If I didn’t have a nose for bad code smells, I’d have done a lot of utterly stupid things, screwed up my code base, and still not have solved my problem.
At the end of the day I do use LLMs, but I’m experienced, so it’s a lot safer for me than for an inexperienced person. That’s the underlying problem.
My point is that even now, you're only talking about using ChatGPT/Claude to help you do the thing you already know how to do (programming). You're right, of course. It's not currently as good at programming as you are.
But so what? The benefit these chatbots provide is that they can lend expertise for "easy", common things that we happen to be untrained in. And inevitably, that's most things!
Like, ChatGPT is a better chef than I am. And a better diplomat. A better science fiction writer. A better vet. And so on. It's better at almost every field you could name.
Instead of taking advantage of the fields where it knows more than you, you're criticising it for being worse than you at your one special area (programming). No duh. That's not how it provides the most value.
Sorry, my point isn’t clear: the risk is that you are being confidently led astray in ways you may not understand.
It’s like false memories of events that never occurred, except it’s false knowledge - you think you have learned something, but a non-trivial percentage of it is flat-out wrong, and you have no way of knowing which parts.
It’s not a “helpful B+ student” for most people; it’s a teacher, and people are learning from it. But they are learning subtly wrong things, all day, every day.
Over time, the mind becomes polluted with plausible fictions across all types of subjects.
The internet is best when it spreads knowledge, but I think something else is happening here, and I think it’s quite dangerous.
Ah, thank you for clarifying. Yes, I agree with this. Maybe it's like a B+ student confidently teaching the world what it knows.
The news has an equivalent: the Gell-Mann amnesia effect, where people read a newspaper article on a topic they're an expert in and realise the journalists are idiots. Then they promptly forget the journalists are idiots when they read the next article, on a topic outside their expertise!
So yes, I agree that it's important to bear in mind that ChatGPT will sometimes be confidently wrong.
But I counter with: usually, remarkably, it doesn't matter. The crepe recipe it gave produced delicious crepes. If it had been a bad recipe, I would have figured that out with my mouth pretty quickly. I asked it to brainstorm weird quirks for D&D characters to have, and some of the ideas it came up with were fabulous. For a question like that, there isn't really such a thing as right and wrong anyway. I was writing Rust code, and it clearly doesn't really understand borrowing. Some code it gives just doesn't compile.
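To make that concrete, here's a made-up snippet of the sort of thing I mean (not its literal output, just the shape of the mistake): every line looks reasonable on its own, but the borrow checker rejects it.

    // Hypothetical example of the kind of borrowing mistake I'm describing -
    // not actual ChatGPT output, just the pattern of error it tends to make.
    fn main() {
        let mut names: Vec<String> = vec!["alice".to_string()];

        // First mutable borrow of `names`, which stays live until the last
        // use of `first` below.
        let first = names.get_mut(0).unwrap();

        // Second mutable borrow while `first` is still alive:
        // error[E0499]: cannot borrow `names` as mutable more than once at a time
        names.push("bob".to_string());

        first.push_str("!");
    }

Each statement is fine in isolation; it's the lifetime of that first borrow that the model doesn't seem to track. But the compiler catches it immediately, so the failure is loud rather than dangerous.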
I'll let you in on a secret: I couldn't remember the name of the Gell-Mann amnesia effect when I went to write this comment. A few minutes ago I asked ChatGPT what it was called. But I googled the answer afterwards to make sure it got it right, so I wouldn't look like an idiot.
I claim most questions I have in life are like that.
But there are certainly times when (1) it's difficult to know if an answer is correct or not, and (2) believing an incorrect answer has large, negative consequences. For example: computer security. Building rocket ships. Research papers. Civil engineering. Law. Medicine. I really hope people aren't taking ChatGPT's answers in those fields too seriously.
But for almost everything else, it simply doesn't matter that ChatGPT is occasionally confidently wrong.
For example, if I ask it to write an email for me, I can proofread the email before sending it. The other day I asked it for scene suggestions in improv, and the suggestions were cheesy and bad. So I asked it again for better ones (less cheesy this time). I ask for CSS and the CSS doesn't quite work? I complain at it and it tries again. And so on. This is what ChatGPT is good for today. It is insanely useful.
The problem, at least for me, is that I feel like the product offerings suggested to us in other comments (not Claude/ChatGPT, but the third party tools that are supposed to make the models better at code generation) either explicitly or implicitly market themselves as being vastly more capable than they are. Then, when I complain, it’s suggested that the models can’t be blamed (because they’re not experts) and that I’m using the tools incorrectly or have set my expectations too high.
It’s never the product or its marketing that’s at fault; only my own.
In my experience, the value proposition for ChatGPT lies in its ability to generate human language at a B+ level for the purposes of an interactive conversation; its ability to generate non-trivial code has proven terribly disappointing.