It's pretty revealing that many of the replies to this (and the general response whenever anyone brings up content that we obviously want moderated but that is still protected by the 1st Amendment) can be summed up as, "well, we'll make an exception for that." In practice, this amounts to the government deciding what communities themselves are able to moderate and filter. That should terrify 1st Amendment advocates, for multiple reasons:
----
A) It's wildly abusable by corrupt governments, and both parties in the US believe that their opponents are corrupt. You don't even need to outright censor -- you can just classify one political group's speech as harassment and make your own speech un-moderatable.
And importantly, when you do that, it doesn't just apply to one platform like Facebook; it applies to everyone, including independent forums owned by the other party. Do you really want to give either Republicans or Democrats the ability to strip Section 230 protections, essentially at will, from opposition-run forums whenever they control Congress and the White House?
----
B) It places a burden on the courts that courts are not designed for. Courts interpret laws; they don't create them. The assumption behind that arrangement is that the people making the laws have a general understanding of their own intent, and that the courts try to figure out what that intent is and what the letter of the law implies. It is unreasonable to propose legislation that you yourself don't understand and then ask the courts to retroactively come up with an interpretation and a set of guidelines that make the law work. That's backwards design, and it will lead to a lot of unintended consequences and dumb, harmful rulings.
----
C) Many of the exceptions being proposed, even in this comment section, are themselves contentious. It is not clear to me, at all, why pornography should have fewer 1st Amendment protections than hate speech. Both are harmful to children, both can gross people out, both cause discussions to degrade, both are things that people don't want on their platforms. Yet there is a general assumption that of course platforms will be able to filter lewd content; they just won't be able to filter low-key racists and Nazis.
This is exactly the kind of content favoritism that the 1st Amendment was designed to prevent. It is inherently problematic for the government to privilege certain categories of speech over others. The distinction between pornography and hate speech is one of the more blatant examples of hypocrisy, but the problem also comes up in subtler ways when people talk about banning spam, as if comment spam, self-promotion, and off-topic discussion were clear categories that everyone agrees on -- they're not.
Reddit, Hacker News, and Facebook all have very different definitions of what self-promotion and spam are. Which definition is correct? Which one should be baked into law? And why? Why is it more reasonable for us to come up with an ad-hoc definition of spam than an ad-hoc definition of hate speech? Why should spam deserve less protection under the 1st Amendment?
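To make that concrete, here's a toy sketch (in Python; every rule below is invented for illustration and doesn't reflect any platform's actual policy) of three communities applying different "spam" definitions to the same post:

    # Toy example: one post, three invented community policies.
    # None of these rules reflect real platform behavior.
    post = {
        "author_is_new": True,
        "links_to_own_site": True,
        "on_topic": True,
    }

    def forum_a_is_spam(p):
        # Invented policy: any self-promotion is spam.
        return p["links_to_own_site"]

    def forum_b_is_spam(p):
        # Invented policy: self-promotion is fine if it's on topic.
        return p["links_to_own_site"] and not p["on_topic"]

    def forum_c_is_spam(p):
        # Invented policy: only link-posting by new accounts is spam.
        return p["author_is_new"] and p["links_to_own_site"]

    for name, rule in [("A", forum_a_is_spam),
                       ("B", forum_b_is_spam),
                       ("C", forum_c_is_spam)]:
        print(f"Forum {name}: spam={rule(post)}")
    # Forum A: spam=True
    # Forum B: spam=False
    # Forum C: spam=True

Each verdict is perfectly defensible for its community; a law would have to declare exactly one of them "correct."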
----
D) Finally, baking these exceptions and moderation decisions into law creates an environment where we're constantly chipping away at the 1st Amendment. The way the US works right now, we have extremely broad legal protections, while the systems built on top of those laws provide additional, subjective moderation controlled by the free market, by individual choices, and by communities themselves. This is a good system because universal moderation decisions are practically impossible. In short, it gives us breathing room and allows us to acknowledge that moderation is both important and subjective.
When people tear away independent, private moderation decisions, the only thing left to protect people from harassment online is the law. You lose that subjectivity, and you lose the acknowledgement that different forums need to be moderated differently.
People who advocate that private platforms shouldn't be able to make their own moderation decisions should not be surprised to see increased calls for the government to get involved in making certain speech illegal. Even if you could force Twitter or HN to turn into 4chan, people aren't going to tolerate that. They are going to ask the government and the courts to make more speech illegal -- because that's the only avenue they'll have left to protect themselves.
----
I have not seen any proposal, from anyone -- not just in comments here, but from political activists, from bloggers, from lawyers -- I have not seen one single proposal for Section 230 reform that addresses these problems. They all propose an incredibly ambiguous set of restrictions on moderation and then fall back on, "we'll work the details out later." But when you're talking about restricting someone's fundamental Right to Filter[0], the details really matter.
----
[0]: https://anewdigitalmanifesto.com/#right-to-filter