This is an informative article, but it is missing a couple of key things. Maybe it was strictly aiming to be factual, but it felt like it was taking the position that Section 230 is good and should stay in place.
Assuming it means to argue for keeping 230, it failed to convince me that 230 is a necessary piece of regulation. Yes, it corrected many misconceptions, but at the end of the day there is still a key argument against 230, and it goes like this:
There are a lot of things posted by third parties on websites that lead to harmful outcomes. Companies that leave that content up are able to profit from the views/clicks it generates. They have already demonstrated the ability to moderate content; they choose not to when it isn't favorable to their profits. Why should I protect platforms or give them the option not to moderate harmful content? If you're making a profit from it, that's part of your product and you should hold some liability. This wouldn't absolve the third parties that post harmful content from also having liability, because the courts regularly rule on splitting liability and deciding which parties hold which portion of the blame. Repealing 230 would allow us to hold companies accountable for leaving harmful content up when they could have taken it down. It would not require the courts to fine them every time something bad happens; it would just give the courts the option to do so when that's the right call. As it stands, 230 is too strong a protection.
I know they can do it. We have real-time profanity filtering in video game chat, and social media automatically detects faces for tagging; I know that with a few automated solutions and a team of mods (paid and/or volunteer), an online platform can do a reasonable job. HN does a great job with surprisingly few mods. Reddit does it and has a pretty robust process for addressing harmful subreddits. YouTube already polices non-advertiser-friendly content, so I know they could moderate more if required. Sure, some things slip through the cracks, but a competent defense and a reasonable court can decide that a company isn't liable for one-off failures, provided it has good processes in place to catch most of the issues. Ford isn't held liable every time someone crashes a car and dies; they are only held liable when they know their cars are dangerous and do nothing about it, and after issuing a recall, the liability is back on the car owner for not getting it fixed. Anyway, the courts and lawyers are well equipped to decide questions of liability without Section 230.
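To be concrete about what I mean by "a few automated solutions," here is a rough sketch (Python, with a made-up blocklist and made-up function names, nothing like what any real platform actually runs) of keyword triage feeding a human review queue:

    # Illustrative only: a naive keyword flagger that queues suspicious posts
    # for human review. BLOCKLIST and the function names are placeholders;
    # real platforms layer ML classifiers and user reports on top of this.
    import re

    BLOCKLIST = {"exampleslur", "examplethreat"}  # placeholder terms

    def needs_review(post_text):
        # True if the post contains any blocklisted word (case-insensitive)
        words = re.findall(r"[a-z']+", post_text.lower())
        return any(w in BLOCKLIST for w in words)

    def triage(posts):
        # Auto-publish the clean posts, hand the rest to the human mod queue
        flagged = [p for p in posts if needs_review(p)]
        clean = [p for p in posts if not needs_review(p)]
        return clean, flagged

    clean, flagged = triage(["best pizza in town?", "let's exampleslur them"])
    print(flagged)  # ["let's exampleslur them"]

A real system is obviously far more involved, but the point is that the basic machinery is not exotic; the hard part is deciding to staff and run it.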
I guess the other thing I want to come back around to, which the article didn't really address, is that right now the protection seems unbalanced/one-sided: it protects platforms when they choose not to moderate, but it puts no restrictions on the moderation they do choose to do. IMHO, if the Facebooks and Twitters are allowed to say "we don't have the ability to police our content," then they shouldn't be deplatforming controversial people like Alex Jones. It's hypocritical to say that tech companies need protections and then be in favor of kicking people off. I'm not actually sad when someone like Jones or Trump gets the ban hammer. But I have a hard time justifying giving companies moderation power and then letting them use it only when a situation gets so bad that it affects their profit. You can't be watching a house burn while holding a fire hose and not expect me to be mad when you decide to water your garden instead. If you're actually unable to moderate, you had better not be making headlines by moderating. Because it raises the question: oh, so you clearly can moderate some things, so let's figure out how much money you made while ignoring the harm you were causing up until that point.
What I'm actually in favor of, by the way, is repealing the wholesale protection of Section 230 and replacing it with something that requires a reasonable level of moderation, or provides a little bit of protection for companies that have made a good attempt at moderation. That would be a much better incentive for companies than the incentive 230 currently provides.
It sounds like you're really just mad because the big companies don't moderate evenly enough for you.
> replacing it with something that requires a reasonable level of moderation, or provides a little bit of protection for companies that have made a good attempt at moderation. That would be a much better incentive for companies than the incentive 230 currently provides.
I'll bite. I think TW/FB have done a reasonable job of moderating. Where do you set the bar? Where should the politically appointed judge hearing the case set the bar? Your solution just moves the problem directly into the sights of the political party in power.
> It sounds like you're really just mad because the big companies don't moderate evenly enough for you.
You're actually not too far off. I'm mad that companies are protected unevenly relative to normal people, which allows them to moderate unevenly with no risk of downside from their decisions. A law that protects a company when it does no moderation, but sets no restrictions on whom it can moderate, is lopsided. My personal viewpoint is in favor of having some standard of moderation, but it's the inconsistency that frustrates me. I think we ought to be choosing between "some moderation standard required" and "no moderation allowed," not the current option, which privatizes the gains and pushes the losses onto the public. If Facebook decides not to moderate a hate group, it gets ad revenue from all that traffic. But if that hate group, which Facebook is enabling, organizes an event where they march through a city and beat someone up, then I think that person should be able to sue Facebook (as well as pressing criminal charges against and suing the people who did the beating). The person who got beaten up would still have to demonstrate to the court that Facebook played some role in enabling the beating, even without Section 230.
So where would I set the bar? I would set it at the level where a company isn't at fault if its platform's moderation is good enough that the bad event was unusual or couldn't reasonably have been expected. If someone gets hurt because Facebook didn't police an openly Nazi group, they should be liable. If someone gets hurt because Facebook has decent moderation procedures in place but the Nazi group was sneaky and posed as a sports fan club and used coded messages, then I think Facebook would have a pretty easy defense even without Section 230.
And you make it seem like judges are wildly political and the legal system is unreliable. There are a few bad judges, but 99% of the cases that would be brought in the absence of Section 230 are so open-and-shut that they would be settled before they went to court. And of the cases that did go to court, most judges are good people and want to do the right thing, regardless of their political leanings. The political affiliation of a judge really only matters when an issue is close and tough to call. Let's not give tech companies wholesale protection from all of the obvious and easily decidable cases where they should be held responsible just because we're worried about a few bad calls.
> So where would I set the bar? I would set it at the level where a company isn't at fault if its platform's moderation is good enough that the bad event was unusual or couldn't reasonably have been expected. If someone gets hurt because Facebook didn't police an openly Nazi group, they should be liable. If someone gets hurt because Facebook has decent moderation procedures in place but the Nazi group was sneaky and posed as a sports fan club and used coded messages, then I think Facebook would have a pretty easy defense even without Section 230.
I like this idea in theory. How does this work if the platform promises no moderation? Is that allowed? I'm thinking of a Mastodon-type setup where an instance can choose not to do any kind of filtering/moderation.
I just started heddit.com: I am one engineer, I have 2 million users, and I am making 10 dollars a month from ads and losing 10 on hosting. How do I moderate my content to protect myself if 230 is taken down?
Maybe the solution is partial exceptions to 230 for $10 billion+ corps? It seems a more antitrust-style approach would address your concerns without removing protections for small upstarts.
I don't think a blanket repeal is going to help the proliferation of free speech, only the proliferation of private speech.
To me heddit.com looks like a parked domain that isn't serving any content, third-party or otherwise, so it isn't protected by Section 230 and doesn't need to be. Hypothetical examples aren't a great argument against real-world situations that have already happened, but a fake example makes me suspect you aren't arguing in good faith.
This is a perfectly sound hypothetical. Do you only allow discussion with people who have experienced a direct hardship? Is that your line for moderation? I'm being antagonistic, but that's part of your 230 fix, right? You want judges to draw the magical cutoff line.
So what do you do about companies that run on a shoestring budget? They can't play in this game?
No, but the previous comment is just specific enough that it seems like they're trying to pass a hypothetical off as a real-life example (which would give it a lot of weight) when in reality it's only a hypothetical (which gives it less weight).
But to address the argument as a hypothetical, I don't think it's fully developed. Fledgling companies with small user bases have very little liability by virtue of their small communities. A platform with a small community can't be used to do harmful things like incite riots or encourage hate crimes unless it's targeting a specific demographic. And if you're targeting a specific demographic that is prone to doing harmful things, then you should be moderating it from day one. If you aren't targeting a dangerous demographic, you have to get pretty big before your platform becomes dangerous by size alone, and I do think it's reasonable to expect companies to have some moderation figured out before their platform gets that big. Plus, a small budget doesn't give founders/developers an excuse not to know what their platform is being used for. Hypothetically, if your platform was small and you didn't realize you were catering to Nazis, then you are liable if those Nazis use your platform to organize an armed march/protest that turns into a riot and people get killed. Or at least I think you should be, and that's why I think Section 230 should be repealed and replaced with something in the middle: not a blanket protection for companies that don't even try to moderate their content, but some protection as long as they are actually trying.
Also remember that not moderating your platform doesn't mean that bad things will definitely happen and that you will get sued. Section 230 only applies to civil cases anyway, and there has to be some sort of grievance for someone to sue you; the cops couldn't proactively shut you down for lack of moderation. So even without Section 230, a company could roll the dice while on a shoestring budget if it decided it was worth delaying a moderation system in favor of a different feature. That's within its rights; it just ought to accept that it might get sued if something bad happened because it prioritized work on shiny features.
So I guess what I'm saying is yes, I don't think a shoestring budget justifies giving a company protections. A new car company can't avoid a lawsuit over ignoring a known safety defect with the excuse that it didn't have the budget. A startup that can't afford to do something right either shouldn't be doing it or should bear some risk.
Last summer, during the BLM protests, the local town FB group got flooded with people behaving badly, and eventually the moderators had to step in, effectively ban a bunch of people, and prevent discussion on the topic for a time.
This wasn't a group that catered to Nazis. It was a group that asked about the best pizza in town and pointed out when the high school was putting on a play. To say it caught the mods off-guard would be an understatement.
The mods stepped in and saved the day, but it basically requires someone to watch the service 24/7 during heated times, because the threads blow up quickly. They don't take days or weeks to warm up; it just takes an active community and some catalyst event.
This was effectively just a forum. It happened in an FB group, but it could just as easily have been some local dude hosting a phpbb forum. Same thing.
So this is the thing I keep coming back to: why would anyone ever dip their toes into user-created content if 230 is changed or repealed? It's all risk at that point, especially for folks who aren't quitting their day job for it.
Or to put it another way: with 230 as it is, Trump can't use twitter/fb/etc, but he can spin up his own phpbb and start his own community there. Why isn't that ok?
This is completely an argument in good faith, based on a readily accessible hypothetical that almost everyone on HN can put themselves in the place of. This isn't a court of law; I don't need to be personally affected in order to offer an opinion.