What I'm complaining about is LLMs becoming so normalized that people use them for even basic intellectual tasks and trust them implicitly, and, judging from the downvotes, get defensive at the premise that they should even consider doing otherwise.
I've seen people post results from ChatGPT with confidence, only to find they'd get a different result if they tried again a few more times (of course, they never bothered). Either it's a statistical model, in which case the math can never be entirely trusted, or it's just making an API call to an ordinary math library, in which case what even is the point? Either way, there are already better tools for the job.
Needing "sophisticated plugins" to do basic math accurately should send people running away screaming from these damned things. I'll concede they have utility but we don't need to abandon our minds, hearts and wills to them entirely.
I got downvotes for not showing the calculation; fair enough, my mistake. I understand the frustration there. And yes, you've got to check ChatGPT's work and ask for clarification to understand what it did and why.
That said, I disagree with the assertions in your post. ChatGPT is a great tool for saving time on simple research and calculations, as long as you double-check it. It wasn't a year ago, but it is now.