Wow the questioner and most of the top responses are overcomplicating this so much. The best response is the simple truth: "ChatGPT only says what's statistically likely in a given situation, which means it comes up with something plausible-sounding, but not necessarily true. Sometimes it suggests programming tricks that don't exist, for example."
Put it in a text file for pasting repeatedly, and ignore any replies of protest.
The whole forum is basically drama bait. Top questions are "how do I deal with employee who slept with my wife?", "Criticized for doing my job too well" etc. None of these questions are looking for answers but instead for validation or just a good ol' watercooler chat.
Several months ago, I received a report that I believed to be generated by an LLM rather than original work. So I asked my coworkers on Slack for their expert opinion: do you believe this is original work or not?
The only answer I received was from a guy who copy-pasted it into an "LLM detector" and uncritically copy-pasted the output for me. This response got several thumbs-up from management and mentors.
Geez, if I'd wanted that kind of crap answer I could've obtained it myself, without asking "experts" for their human opinions.
This is the problem with getting people to understand that ChatGPT can't actually think: a lot of people also don't actually think as a matter of habit.
I feel fairly confident in detecting LLM output in the course of my work, but that doesn't mean that I should unilaterally make decisions based on my own opinion without asking other experts who are right there beside me at work. Is there something wrong with asking for a second opinion, when management has provided us Slack channels for exactly this purpose?
I mean yeah, I can understand that, except that it was a team channel, and nobody needed to respond at all. So how in the world was I pinning anyone down? And why did a couple senior folks endorse that answer? Were they pinned down as a result?
You weren't trying to pin anyone down, but from the respondent's point of view, any response they endorse pins them down because it opens them up to being wrong and responsible (this depends on the culture at the company and the individual of course).
The conscientious people in the chat will have a hard time ignoring an unanswered query, so answering with the tool's output alone forms a sort of compromise.
This can potentially be defused by first stating what you think and asking for a second opinion, but of course, the easy out there is to simply endorse whatever is already on the table (maybe like your senior folks).
I was only suggesting that the situation could be more nuanced than a coworker implicitly suggesting "just google it for fuck's sake", but for all I know, it wasn't.
Just asking "Does that work?" can demonstrate this. Slightly contrived example (I've trimmed the ChatGPT output down; it gives some warnings about code readability and notes that it only has knowledge up to 2022, which might be useful if the code worked at all):
> User: how can I pass two parameters to a hugo partial as positional parameters (without using a dict)?
> ChatGPT: If you want to pass parameters as positional arguments without using a dictionary, you might need to work around this limitation. One approach is to use a custom data structure, such as a slice or an array, to emulate positional parameters. Here's an example:
{{/* In your main template */}}
{{ partial "yourPartial.html" "value1" "value2" }}
{{/* In yourPartial.html */}}
{{ $param1 := index . 0 }}
{{ $param2 := index . 1 }}
[ChatGPT has also suggested in other runs that the values can be accessed with {{ .Param1 }} and {{ .Param2 }}]
> User: Does that work?
> ChatGPT: I apologize for the confusion in my previous response. I made an error in the example provided. Hugo's templating system doesn't support passing parameters as positional arguments directly, and my previous example was incorrect.
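For what it's worth (and assuming I'm reading Hugo's docs right), partial only takes a single context argument, so the three-argument call above fails outright; the usual workaround is to wrap the values in a slice (or dict) yourself, roughly:
{{/* Assumed workaround: pass one slice as the partial's context */}}
{{ partial "yourPartial.html" (slice "value1" "value2") }}
{{/* In yourPartial.html */}}
{{ $param1 := index . 0 }}
{{ $param2 := index . 1 }}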
Is this really your suggestion to someone dealing with a coworker making your work harder to do by gumming up the works with a bad tool? This is cult-like behavior.
If the coworkers are convinced ChatGPT is a solid source of truth, showing them that ChatGPT will prove their generated answer wrong sounds more effective and time-efficient to me than writing a refutation by hand that they won't trust as much as GPT.
Or you can ask ChatGPT for the most insulting comment it can come up with. When they get upset, tell them ChatGPT generated it. Look surprised, like you just had the idea, and say that maybe ChatGPT-generated content in emails isn't actually a good idea.
I’ve been thinking about the future of this type of “AI fatigue”. Microsoft 365 Copilot has embedded this feature in Outlook, Teams, Word, etc.
I’m concerned that people’s desire and ability to read content from coworkers will significantly decrease, because…why spend my life reading something that “Bob” didn’t even write?
I’m not the only one thinking about this. M365 Copilot even has a “Make it sound like me” button…
I still facepalm every time someone excitedly demonstrates how they used ChatGPT to turn a nice concise message into a wall of PR fluff, as if that's some victory. I get that for some types of writing this is unfortunately beneficial, but for routine transactional emails (especially with internal colleagues), I think 90% of people would much prefer brevity and plain language.
If I ever spend a long time composing an email, it's usually because I'm trying to make it as short and clear to read as possible. What's that old quote, "forgive me for the length of this letter; I don't have the time to make it short"?
But you can distill a wall of PR fluff to a concise message with ChatGPT... The problem is the average user thinks "long text = good" and their bosses often agree.
I think this is the main problem that will increasingly come into focus.
OpenAI has one of the fastest-growing software releases in history. It's kind of insane how after a year this is already starting to alter people's communication styles. Technologists will have to start asking "why do people still insist on doing this", even with all the discussion about incorrect information coming up. Well, because they wanted to, they could, and nobody has told them "no" yet.
I hope the consequences of such a large-scale software rollout are considered next time. If you have to stop and say "we should educate people about the misuses of this," then some proportion of the population won't be educated, or will reject being educated outright and do what they want. In my view, the only thing preventing people from throwing around ChatGPT suggestions in inappropriate contexts is... not giving them ChatGPT. To remain naive to the possibilities lest they get excited and unwittingly unleash them on everyone, regardless of their true intentions - as had been the status quo up until a few years ago. Draconian legislation outlawing GPUs or LLMs is a more divisive solution than the general public just not being aware of the possibilities of LLMs.
I know this is unrealistic, however. "Time for people to embrace the tech and move on" has become a thought-terminating cliche. As a programmer, I'm lost as to what to say or do about this.
I've seen accusations and suspicion of HN commentators giving ChatGPT responses. Accusing someone of using ChatGPT here has become something more nuanced than an insult, but still sows discord. If it's coming to this, I think it is worth examining for what purpose we build the things we do.
I hate that the top voted answer is tattle to the boss. The poster and artists are both adults, and it is tagged communication.
It seems like the first step should just be ignoring it or communicating the issue to them. Something like: set up a meeting to go through a couple of examples of the flaws in extreme detail, explain that the suggestions often lack the context needed to come up with a great answer, and say you appreciate that they're trying to help find a solution with ChatGPT, but it's not necessary because you also have access to ChatGPT and leverage it when you can.
It's not "tattle to the boss", it's using a manager as a resource to help a team work together more effectively. That's part of what they're paid for. If it was one person, a 1::1 could make sense, but they're getting swamped by teammates interrupting their work, it's not their job to deal with that and delegating it to the person whose job it is makes a lot of sense.
As an employee, before I escalate any issue I always try to do my best to fix it myself first (barring some efficiency gain like it's far easier for someone else to resolve). And as a manager I appreciate reports who do the same.
As a past manager (and getting ready to manage another team now), the last thing I'd want would be for anyone on my team to have to deal with being swamped by inane requests from another team, and that's something I'd step in to help with even unasked. As an IC whose time has always been highly valuable, neither my current manager nor any of my past managers would ever have put up with a group of people from another team interrupting me and squandering my time like that; they would have stepped in as well.
Are they? Adults usually employ critical thinking and his artist colleagues obviously don't. It's not OP's task to explain the flaws in OpenAI's product. Top reply is right - raise it with their manager. If the artists are unhappy they can contact OpenAI's customer support for details on why the product doesn't work as advertised.
Being an adult is exactly why I'm going to involve the person who gets paid to deal with situations like these instead of following some idiotic notion of "tattling" like I'm in grade school or the mafia.
OP has literally done exactly what you're suggesting, and it hasn't worked. Going to the manager is literally what you're supposed to do next.
> I then have to take the time to explain why ChatGPT's proposed solution wouldn't work, which is tedious and difficult when the other persons do not understand many of the basic ideas involved. They also seem skeptical, and I get the idea they feel I'm incompetent because as I understand it ChatGPT is very useful in their setting, and they have come to believe it to be the ultimate source of knowledge.
He explained why a specific solution wouldn't work, but he did not tie that into a request to stop sending him ChatGPT suggestions. Telling them to stop is a key part of that discussion.
The right approach IMHO is to educate those coworkers on the benefits and weaknesses of ChatGPT. This can only be handled effectively at a systemic level, not on a case-by-case basis.
Btw, this is not a new problem. Even before LLMs' widespread popularity, the same problem occurred with those coworkers quickly Googling an answer they do not understand and submitting that as a suggestion.
Copy the suggestion into ChatGPT and ask it to write a polite reply declining, pointing out that you're able to ask ChatGPT yourself if you want a response from it. Then just copy and paste it back to them without even reading it.
I do sympathize, though; I'm seeing this exact behavior in my own workplace, and it's only going to get a lot worse.
Reading the replies here has made me feel lucky in my workplace. Most people here tried using ChatGPT for a while, but the vast majority stopped using it entirely, so this never happens here. I didn't realize how fortunate I am!
Absolutely fascinating read. I am really curious about his artist-colleagues who find ChatGPT to be a boon to their work. That does not strike me as intuitive.
Can't speak for him, but I would guess that the answer is most working artists pay the rent churning out a stream of unfulfilling, uncreative, derivative works (so they can buy themselves time to do actual artist things) requested by the sort of customer who has no idea what their real motivations are for requesting an artist who can create, when what they actually want is a craftsman who can make.
Most buyers of "creative work" are perfectly happy to pay a premium for something that looks remarkably like something -else- that impressed them, even though this is orthogonal to the purpose of working with an artist.
I don't blame (actual) creative artists for dumping this craftwork drudgery onto AI, but it's hard to say what the long-term consequences of tapping an ever-dwindling reserve of unexploited prior creative art will be.
Maybe the world will end up like one of these ancient cultures whose creative inventory crystallized a century or two into their development and remained the same for the next 500 years until they collapsed.
My artist friend uses AI in their creative pipeline to jumpstart the vision in their head into something visual to start from. They then use that to either refine or recreate based on the visualization the AI provided.
If I understand correctly, you're not happy with the way ChatGPT can realise it's own mistake? or that it makes the mistake? I am not really sure, what is the problem exactly?
it doesn't "realize its own mistake", it requires negative feedback from the user to spur it into "apologizing" and trying to fix the "mistake(s)". It doesn't matter if the mistake is real or imagined.
even the uncensored (less RLHF'd) models do this - apologize after negative feedback and attempt to fix.
And "it's" only means "it is", the possessive is "its". You probably knew that, but didn't realize your own mistake without negative feedback.
>It doesn't matter if the mistake is real or imagined.
Is this based on your experience?
My experience with ChatGPT is that it rarely apologises or attempts to fix something that is correct, though sometimes it does apologise and give you an alternative or the exact same working solution.
That you think this is weird advice or advice that makes LLMs look good shows that your position is one of reflective opposition rather than understanding and experience with the things. If you interact with them much it's an obvious course of action. It will come up with nonsense, and if you tell it it's nonsense it will immediately apologise and explain why the nonsense was nonsense. It will come up with good information, and if you tell it the information was bad it will immediately apologise and offer nonsense explaining why. You can easily get it in a loop of it agreeing that solution A is bad and offering solution B, then agreeing that we previously excluded solution B and offering solution A instead. It is a very quick way to get an intuitive understanding of their limitations.
> It will come up with nonsense, and if you tell it it's nonsense it will immediately apologise and explain why the nonsense was nonsense.
This only makes sense if you believe LLMs are something more than they are. As someone who knows how LLMs work pretty well I don't. The presumption here is that the following advice is 100% better or correct and I also don't believe that.
You should be doing this regardless of where the suggestion comes from, just to make sure you all don't miss something obvious; this is literally what happens in troubleshooting / brainstorming sessions.
I think it's a fundamentally flawed premise of the site: that answers ought to be self-contained and all else is noise. No debate, no commentary. That's why they reject opinion-based questions. But not all things in life are cut and dried. I get it, though; subjectivity makes it difficult to promote quality answers.
I get the theory, but I've seen the effects of that decision.
The pre-objectivity police StackOverflow was sooo much more useful (2010ish). 85% of the top 1000 questions had incredibly well written and insightful answers to subjective questions. Things like "How do I become a better software engineer?" which is far more useful than "How can I rename a local Git branch?"
For a time these bled out into other smaller stack exchange communities, but many eventually fell to the objectivity police.
I learned so much as a young dev in 2009 when I first used Stackoverflow. Experts poured their hearts out and shared so much. Now the busybodies are there to police everything and what are you left with... a dead site with nothing but hall monitors! What a shame for the next generation of developers.
Yeah, they pretty much explicitly disclaim those questions that don't have cut and dried answers. But I think that means they never should have dipped into topics like "workplace", which has vanishingly few cut and dried topics.
On these sites the idea is that answers belong in the top-level answer posts where they can be voted up and down for the purpose of sorting the competing ideas and forms of responses to the primary question post. Criticisms should remain in the comments until they are integrated and addressed in the answers, at which point they are obsolete and can be removed. Therefore chats belong elsewhere, and are helpfully moved into chat rooms, which you can click through to and read if you're interested.
AI has created a dichotomy --- those who are below-average in their skills will see it as an improvement, while those who are above average will see it as regression.
AI has created a new standard from which “average” is defined. Rather than a real-life analog to a mathematical mean, it’s simply understood to be the level at which AI performs.
It is a tool; those who learn to use the tool will benefit more. Average people will be average in using it; smart people have the potential to use it smartly.
"photoshop has created a dichotomy --- those who are below-average in their skills will see it as an improvement, while those who are above average will see it as regression."
"the automobile...
"the camera...
"the printing press...
signed: someone who really does not like these LLMs. Not for what they can do, but for the doubt they bring to the table. This is probably the same feeling as when digital image manipulation got good and you could no longer trust photos. (Not that you ever really could; photos, even unaltered ones, are notoriously easy to use to show only what you want them to.)
No, it couldn’t. The tools to which you compare require a different or complementary set of skills to use, and the user must understand the output or mode of operation that the tool conveys.
Prompt engineering (within the context of ChatGPT) is more useful for jailbreaking than for using it for its intended purpose. Aside from which, the complaint of the OP in the linked post is that their coworkers do not understand the output; if they did, they would not rely on ChatGPT as a smokescreen.
What about googling then? I often google how to do X while writing code or developing because I don't remember the exact CLI flags and reading the manpage takes more time than asking google. I'm not a big LLM user yet but I imagine a lot of people have replaced googling with ChatGPT in their workflow.
As someone who grew up around artists who eventually used Photoshop when it was version 1, I can only express how completely ahistorical that recounting is to me. I also know a fair bit about the history of the camera and can firmly say that's also not how that went down.
I'm not looking forward to having to argue with a coworker that thinks MongoDB is web scale because ChatGPT told them so. It will be a hell of a lot more annoying than [1]
The AI wave is proof that with the right marketing strategy you can convince masses of idiots that any lie is truth. The lie here being that chatgpt is intelligent and it produces useful complex output. Reddit is full of downvoted ai garbage. Experts in any field will tell you it produces nonsense. Yet these drones keep thinking it’s accurate.
I had bought into the hype initially, but over the last few months, the more I use ChatGPT for real work, the more I realize that the hype is mostly coming from people who aren't actually using it themselves. They are instead hyping the idea of it that exists in their head.
No, my point is that StackExchange (primarily StackOverflow) has been a source for many to grab work product without actually doing the work. ChatGPT is only taking that to the next, more automated, level.
The author makes it clear in the associated discussion that he doesn't immediately assume the suggestions are bad based on their source:
> I take the time to understand and consider each suggestion, not rejecting anything out of hand, and share them with my team members (of which there are 17).
The issue is the time it takes to explain _why_ these are bad ideas to non-technical (and skeptical) colleagues.
That sounds to me less of an issue with ChatGPT, and more with having colleagues that don't understand how to engage in reasonable discussion or evaluate information correctly.
ChatGPT is absolutely a symptom here of the underlying problem. Frankly I think it's possible that OP is being too solicitous with these requests. Why does this other team think OP needs a constant input of new ideas? Why do they feel comfortable repeatedly second-guessing OP in OP's area of expertise? It sounds like the planning process is totally out of whack. If they were clipping ideas out of a magazine and asking OP to do them it could be just as bad.
And the question was how to get their coworkers to stop giving copy/pasted ChatGPT technical answers from prompts written by nontechnical people, not how ChatGPT can do a better job.
Good point. Maybe they can invite a discussion where the suggester walks through the ChatGPT steps, explaining how it would be done? Hopefully people aren't so flippant that they hammer their keyboard and hit send on an email. If that's the case, then maybe they need an ego check.
I think my default response would be (regardless of the ChatGPT instructions):
"I looked it over and it doesn't actually work. ChatGPT is good for many high level things but when the specifics get technical, is struggles and invents solutions that don't exist; like a bloom filter. Ask ChaGPT what a bloom filter is."
Sure it's not the tool if you completely ignore 80% of the story. It's cheaper to ban a tool than "hire people who can critically think", as if I ever believed people could intentionally detect that.
The issue here is that the artists are confident that the suggestions (which they don't themselves understand) are good because they came from ChatGPT.
I would argue that if you cannot answer questions on something generated before suggesting it or handing it off, you absolutely should not be pushing for it. Do the research necessary to feel fully confident on a hand-off.
Hell, if something doesn't make sense, ask it again for clarity. It does a reasonable job at that, but again, before you hand off a solution, make darn sure you understand it, because chances are high that whoever pitches a solution that sounds usable is going to break ground and work it.
Honestly had the same experience with a junior developer yolo-ing some changes into a PR before even running it themselves to check that it works. It's just a waste of everyone's time and a lack of respect. That's what needs to be conveyed to the artist colleagues.
They aren't good. Also, diffusion models work well for the artists to spit out pixels. The artists assume the LLM generated code is the same quality and that the OP is a fool who won't do what they ask due to lack of skill or stubbornness.
It's messing up the dynamic where creatives come up with blue sky stuff and developers come to a compromise on a possible solution. Now you have this AI model hallucinating plausible, but fake solutions.
The model says what they want to hear because it is a chicken, not a pig in this scenario.
The issue is that it takes effort to determine whether the idea is good or to find subtle errors in generated code, while generating it with GPT requires almost no effort from the person, who then offloads that work.
My gut reaction is no, but let me think out loud using reductio ad absurdum to see if there is merit.
A five year old asks chatGPT how to achieve world peace. The response is not only possible but easily affordable on a short timeline. Do I care that a 5 year old got it from ChatGPT? I guess not. A part of me would want all the adults on the planet to stop and acknowledge how tragically ineffectual they are, but as far as whether the source matters...nope.
Good point. Thanks for letting me think through it.
Yeah, but it appears this guy can't convince the suggestion givers because they lack the expertise to evaluate their suggestions. For instance, they could be asking ChatGPT how to do something in code and then sending that.
What do the artists think of artwork generated with Stable Diffusion?
If you use the ICantBelieveItsNotPhotography model, you can generate some great art.[1] Does your company still need that many artists?