Several months ago, I received a report that I believed to be generated by an LLM rather than original work. So I asked my coworkers on Slack for their expert opinion: do you believe this is original work or not?
The only answer I received was from a guy who pasted it into an "LLM detector" and uncritically copy-pasted the output back to me. This response got several thumbs-up from management and mentors.
Geez, if I'd wanted that kind of crap answer I could've obtained it myself, without asking "experts" for their human opinions.
This is the problem with getting people to understand that ChatGPT can't actually think: a lot of people also don't actually think as a matter of habit.
I feel fairly confident at detecting LLM output in the course of my work, but that doesn't mean I should make decisions unilaterally on my own opinion without asking the other experts who are right there beside me at work. Is there something wrong with asking for a second opinion, when management has given us Slack channels for exactly this purpose?
I mean yeah, I can understand that, except that it was a team channel, and nobody needed to respond at all. So how in the world was I pinning anyone down? And why did a couple of senior folks endorse that answer? Were they pinned down as a result?
You weren't trying to pin anyone down, but from the respondent's point of view, any response they endorse pins them down, because it opens them up to being wrong and responsible (this depends on the company culture and the individual, of course).
The conscientious people in the chat will have a hard time ignoring an unanswered query, so answering with the tool's output alone forms a sort of compromise.
This can potentially be defused by first stating what you think and then asking for a second opinion, but of course the easy out there is to simply endorse whatever is already on the table (perhaps like your senior folks did).
I was only suggesting that the situation could be more nuanced than a coworker implicitly suggesting "just google it for fuck's sake", but for all I know, it wasn't.