Yes it does. I'm honestly getting tired of defending this position here; I shouldn't have to explain why it is problematic that a comment or submission is AI-generated on a forum that tries to maintain a high standard of discussion.
"High standard" implies more than just efficacy of the content.
Perhaps AI-generated content would be better than human-generated content, but I just don't know if I'm ready to read a bunch of AI-written articles, or to unknowingly interact with AI chatbots posting comments on Hacker News. So hopefully you're human.
I didn't say it didn't matter. I asked whether it mattered. Sorry!
My thinking was that even if it is generated, I find the comments more interesting and engaging than the article itself, same as with a lot of clearly non-ChatGPT HN posts. But I understand your point, and actually I agree.
I appreciate that you changed your mind, but more and more often I read someone playing devil's advocate, asking whether it is a big deal if one posts content straight from ChatGPT.
To me it is absurd to even ask, and it is mind-numbingly tiring to have to explain why I would rather talk to humans. The fact that an increasing number of posters don't seem to have a problem with it makes me think this platform's quality of discourse will not last long (nor the rest of the internet's at large, but today I'll tone down my usual dead-internet doomsday predictions).
What makes you sure that the quality of AI-generated content (which has potentially been the result of prompting and editing by a human) is and will be inherently worse than purely human-generated content?
Where do you draw the line? Is using machine translation as a non-native speaker a problem?
> What makes you sure that the quality of AI-generated content (which has potentially been the result of prompting and editing by a human) is and will be inherently worse than purely human-generated content?
For the time being, I don't think you can trust AI-generated content; quite often, when I asked ChatGPT about something I needed to be sure of, it made mistakes. Take erroneous citations and references: do you think humans fake them the way ChatGPT hallucinates them?
> do you think humans fake them the way chatGPT hallucinates them?
They don't need to 'fake' them; they can just be inadvertently wrong.
I have good, human friends who tell me erroneous things all the time. I don't take them at face value; I check them. I do this for nearly every piece of information that is going to inform a decision or point of view I take. Why should we be any less vigilant with a technology like ChatGPT?
What I implied is that we should be more vigilant with ChatGPT. I don't think it is common for an article to completely invent a reference that does not exist, but it is common with ChatGPT.
How did I not notice this analogy?! It is in fact true that a JPEG artifact upscaled with fabricated details is the same thing as a wholly forged reference to a paper describing when to diagnose appendicitis in children. Thank you.