> Are we personally comfortable with such an approach?
I am not, because it's anti-human. I am a human and therefore I care about the human perspective on things. I don't care if a robot is 100x better than a human at any task; I don't want to read its output.
Same reason I'd rather watch a human grandmaster play chess than Stockfish.
There are umpteen such analogies. Watching the world's strongest man lift a heavy thing is interesting. Watching an average crane lift something 100x heavier is not.
Objectively we should care, because the content is not the whole value proposition of a blog post. The authenticity of the content, and your trust in its validity, come from your connection to the human who made it.
I don't need to fact check a ride review from an author I trust, if they actually ride mountain bikes. An AI article about mountain bikes lacks that implicit trust and authenticity. The AI has never ridden a bike before.
Though that reminds me of an interaction with Claude AI. I was at the edge of its knowledge with a problem, and I could tell because I had found the exact forum post it quoted. I asked if this command could brick my motherboard, and it said "It's worked on all the MSI boards I have tried it on." So I didn't run the command. Mate, you've never left your GPU world; you definitely don't actually have that experience to back that claim.
> It's worked on all the MSI boards I have tried it on.
I love when they do that. It’s like a glitch in the matrix. It snaps you out of the illusion that these things are more than just a highly compressed form of internet text.
We have many expectations in society that often aren't formalized into a stated commitment. Is it really unreasonable to expect some commitment to society around these less formally stated expectations? And is it unreasonable to expect that communication presented as human-to-human actually comes from a human? I think not.
If you were to find out that the people replying to you were actually bots designed to keep you busy and engaged, feeling a bit betrayed by that seems entirely expected. Even though at no point did those people commit to you that they weren't bots.
Letting someone know they are engaging with a bot seems like basic respect, and I think society benefits from having such a level of basic respect for each other.
It is a bit like the spouse who says "well I never made a specific commitment that I would be the one picking the gift". I wouldn't like a society where the only commitments are those we formally agree to.
I do appreciate this side of the argument, but... do you think that the level/strength of a marriage commitment is worthy of comparison to walking by someone in public / randomly riding the same subway as them / visiting their blog?
I find them comparable, but not equal, for that reason.
Especially if we consider the summation of these commitments. One is obviously much larger, but it defines just one of our relationships within society. The other defines the majority of our interactions within society at large, so a change to it, while much less impactful to any single interaction or relationship (I use the terms interchangeably here, as often the relationship is just that one interaction), is magnified by how much more often it occurs. That means the cost of losing trust in such a small interaction is much larger than it first appears, which I think makes the two all the more comparable.
(More generally, I also like comparing things even when the scale doesn't match, as long as the comparison really applies. Like apples and oranges, both are fruits you can make juice or jam with.)
That is how illustrations work. If someone doesn't see something, you amplify it until it clubs them over the head and even an idiot can see it.
And sometimes, of course, even that doesn't work, but there have always been and always will be the clued, the clue-resistant, and the clue-proof. You can't do anything about the clue-proof, but at least presenting the arguments allows everyone else to consider them.
This fixation on the reverence due a spouse is completely stupid and beside the point of the concept being expressed. As though you think there is some arbitrary rule about spouses that is the essence of the problem? The gift-for-spouse is an intentionally hyperbolic example of a concept that also exists, and applies the same, at non-hyperbolic levels.
The point of a clearer example is that you recognize "oh yeah, that would be wrong", and then the next step is to ask: what makes it wrong? And why doesn't that apply the same way back in the original context?
You apparently would say "because it's not my wife", but there is nothing magically different about needing to respect your spouse's time vs anyone else's. It's not like there is some arbitrary rule that says you can't lie to a spouse simply because they are a spouse and those are the rules about spouses. You don't lie to a spouse because it's intrinsically wrong to lie to anyone at all. It's merely extra wrong to do anything wrong to someone you supposedly claim to extra-care about. Lying was already wrong all by itself, for reasons that have nothing special to do with spouses.
This idea that it's fine to lie to everyone who isn't your spouse and waste their time, to commandeer and harness their attention for an interaction with you while you let a robot do your part and go off doing something more interesting with your own time and attention, simply because you don't know them personally and have no reason to care about them, is really pretty damning. The more you try to make this argument that you seem to think is so rational, the more empty inside you declare yourself to be.
I really cannot understand how anyone can try to float the argument "What's so bad about being tricked if you can't tell you were tricked?" There are several words for the different facets of what's so wrong, such as "manipulation". All I can say is, I guess you'll just have to take it on faith that humans overwhelmingly consider manipulation to be a bad thing. Read up on it. It's not just some strange idea I have.
I think we are having a fundamental disagreement about "being tricked" happening at all. I'm intelligent enough to follow the argument.
I see that, in the hyperbolic case, you are actively tricking your wife. I just don't agree that you are actively tricking random public visitors of a blog in any real way. There is no agreement in place such that you can "trick" them. Presumably you made commitments in your marriage. No commitments were made to the public when a blog got posted.
It's equally baffling to me that you would use one case to make the point of the other. It doesn't make any fucking sense.
Why was it wrong in the wife case? What specifically was wrong about it? Assume she never finds out and totally loves the gift, is purely happy. (I guess part of this also depends on the answer to another question: what is she so happy about, exactly?)
There are many discussions of what sets apart a high trust society from a low trust society, how a high trust society enables greater cooperation and positive risk taking collectively, and how the United States is currently descending into a low trust society.
"Random blog can do whatever they want and it's wrong of you to criticize them for anything because you didn't make a mutual commitment" is low-trust society behavior. I, and others, want there to be a social contract that it is frowned upon to violate. This social contract involves not being dishonest.
We should care if it is lower in quality than something made by humans (e.g. less accurate, less insightful, less creative, etc.) but looks like human content. In that scenario, AI slop could easily flood out meaningful content.
I don't care one bit, so long as the content is interesting, useful, and accurate.
The issue with AI slop isn't with how it's written. It's the fact that it's wrong, and that the author hasn't bothered to check it. If I read a post and find that it's nonsense, I can guarantee I won't be trusting that blog again. At some point my belief in the accuracy of blogs in general will be undermined enough that I shift to only bothering with bloggers I already trust. That is when blogging dies, because new bloggers will find it impossible to find an audience (assuming people think as I do, which is a big assumption, to be fair).
AI has the power to completely undo all trust people have in content that's published online, and do even more damage than advertising, reviews, and spam have already done. Guarding against that is probably worthwhile.
Even if it's right, there's also the factor of: why did you use a machine to make your writing longer, just to waste my time? If the output is just as good as the input, but the input is shorter, why not show me the input?
Are we personally comfortable with such an approach? For example, if you discover your favorite blogger doing this.