That's because this stuff was a tiny fraction of the activity on Facebook (it's HUGE, and still mostly for baby pictures), but a substantial fraction of the content on Parler and Gab (they're tiny, and mostly for the stuff that Facebook and Twitter ban).
A lot of people who object to Parler's treatment want this framed as a simple binary, but it's more a matter of degree and proportion.
I'm sure removal was still weighed against how much would be lost (financially, in functionality lost by people who aren't violating any rules), etc.
In the Facebook case, there is some moderation (although it obviously failed here), and the fraction of users who aren't doing anything wrong is overwhelmingly large. For people in that category, an alternative to Facebook, if it were banned, would be hard to find.
On Parler, moderation was basically non-existent, the number of well-behaved users is small (though still a majority, even on Parler, obviously), and alternatives for those users, such as Twitter, are readily available, making the damage to them smaller.
This isn't true... Parler does remove calls for violence when they are reported.
I used to use Parler before the ban and I don't remember seeing any calls for violence. I'm not saying they didn't exist, because to be fair I wouldn't have been in those circles anyway, but it seemed quite tame in contrast to what I see from the average person I follow on Twitter, where I regularly see people posting violent tweets about Trump and his supporters, and even tweets supportive of the violent protests we saw earlier in the year. It's only in recent weeks that those who were calling for violence throughout 2020 have finally started denouncing it.
Everyone who is acting like Parler was this really radical place just didn't use it, IMO. I think people are clearly right when they say this about platforms like BitChute and some of the Reddit alternatives, but as far as I could tell the content on Parler was just like every other social network's, just a little more conservative-leaning.
If Apple is that concerned, they should ban the Saidit app, where Jewish conspiracy theories and far-right content are everywhere. I strongly suspect they're lying to you when they say it was because of a lack of moderation. Just like how Twitter allows Richard Spencer to remain on their platform while claiming Trump has gone too far. The evidence suggests they don't target extremism; they target popular conservative commentators and platforms.
No, Apple pulled it from the App Store because Parler expressly refused to moderate the violent content even once it was made aware of it, while still taking the time to moderate and ban left-leaning posts, indicating that it had the resources to moderate; it simply chose not to.
I know, but do we really expect Apple to start World War III by banning Facebook from the App Store? We have to think in terms of reasonable solutions.
>That's because this stuff was a tiny fraction of the activity on Facebook (it's HUGE, and still mostly for baby pictures), but a substantial fraction of the content on Parler and Gab (they're tiny, and mostly for the stuff that Facebook and Twitter ban).
There's no source, but it stands to reason: Facebook has something like 2 billion daily active users, and its roots are in nonpolitical social networking (meaning baby pictures, etc.). Parler had something like 2-3 million daily active users at its peak [1], and most of them joined after Twitter started putting warning labels on false claims of election fraud. Parler also billed itself as a "free speech" social network, which in practice means allowing things other social networks prohibit, which means it mainly gets users who want to post and read such stuff [2].
It still has to be supported with logic and precedent. You could compare the action to actions taken against others, like Facebook, using metrics. If something is entirely subjective, then you would violate equal protection when one person is found guilty and another is not, based solely on which judge heard the case.
> Sounds like some serious mental gymnastics to me. And also not provable with any specific metrics.
It's not really, it just takes the concept of collateral damage into account. Should a billion Grandmas be denied their baby picture fix on account of a moderation team missing a few thousand users' insurrectionist and para-insurrectionist posts? Obviously not, since that action is high on collateral damage.
That said, Facebook obviously needs to do a better job here, and this is one more example of their foot-dragging causing problems.
Moral of story: FB and Twitter can do whatever they want, because they have more baby pictures. Any new social network has fewer baby pictures, and therefore can be held to a higher standard, to the point that it will be obliterated before it has a real chance to compete.
> Moral of story: FB and Twitter can do whatever they want, because they have more baby pictures. Any new social network has fewer baby pictures, and therefore can be held to a higher standard, to the point that it will be obliterated before it has a real chance to compete.
Eh, not really. You're forgetting the other half of the equation: Parler's niche was the stuff Facebook and Twitter had either banned or discouraged (like false claims of election fraud).
The real moral of the story is: don't try to build your social network from Facebook and Twitter's concentrated dross.
That doesn't seem to help your argument: it means that people deplatformed from FB/Twitter should also be deplatformed everywhere else, which doesn't sound like a great policy for freedom of speech or fostering a competitive marketplace.
There was a very real chance that Parler could have attracted a lot of mainstream people. Maybe the 70M people who voted for Trump would have gone there, and started posting baby pictures, which would improve their ratio a lot.
>which doesn't sound like a great policy for freedom of speech or fostering a competitive marketplace.
It's entirely possible and reasonable to argue that the current limits on freedom of speech are insufficiently broad, and that the "competitive marketplace" of ideas has very little going for it empirically.
People deplatformed from Facebook and Twitter were deplatformed for a good reason. So yes, they should be deplatformed everywhere else.
And this policy works just fine with freedom of speech, seeing as hate speech, threats of violence, and insurrection are not covered by the 1st Amendment, which doesn't even govern these companies anyway.
> it means that people deplatformed from FB/Twitter should also be deplatformed everywhere else, which doesn't sound like a great policy for freedom of speech or fostering a competitive marketplace.
That's actually a great policy, because FB/Twitter aren't super-eager to ban people for their speech, so the ones they systematically de-platform are usually doing something pretty bad. Note: I'm not saying they always make the right call 100% of the time.
Freedom of speech is not freedom of reach. Society is not obligated to give bad ideas an audience. In fact, it's doing its job if it filters those ideas out.
Facebook has a huge moderation team. Just because some things get through that cause damage doesn't put it in the same category as a platform specifically built to enable terrorism.
I’m sorry but this isn’t Reddit, you can’t just claim a platform was specifically built for terrorism because you’re upset.
It has definitely attracted an alt-right crowd, but “specifically built to enable terrorism” is some ridiculous cable-news-level propaganda.
I’d much rather this conversation be about free speech and where lines can be drawn, and it bothers me that platforms can be taken down everywhere because of an unrelated group that happened to use them for something horrible. What about Signal? It’s been getting a lot of popularity recently. What if it comes out that the terrorists are on Signal now, and there’s no way to moderate them because of the encryption? Will Signal be taken down for refusing to add a backdoor?
If you build a platform specifically to house/attract people who were banned from typical platforms because they had a tendency towards promoting violence, then I would argue that you are very much enabling (possibly even encouraging) their behavior. I believe that is a pretty logical sequence, and a clear line to draw.
There are very few people who earnestly want an unmoderated place of discourse, because those serve very little functional purpose. Eventually most people will find something either irrelevant to their interests or personally repugnant presented to them and will go back to a place where there is some degree of moderation in place, so that they can consistently find things that interest and engage them. Why are you on HN and not one of these wholly unmoderated forums? Even curation of topics is a form of moderation, not to mention HN's strict approach to actually thoughtful commentary. The people who earnestly want a wholly unmoderated space are increasingly likely, depending on their desire for it, to be among those engaging in something so boorish that it got them removed from moderated spaces.
Furthermore, there is no small amount of irony in you saying you'd rather talk about free speech right after telling someone what they can or cannot claim.
> there is no small amount of irony in you saying you'd rather talk about free speech right after telling someone what they can or cannot claim.
You can't make those claims and expect people to take you seriously without backing them up.
> There are very few people who earnestly want an unmoderated place of discourse, because those serve very little functional purpose.
Do you mean unmoderated or simply moderated to your specific standards?
Parler was never unmoderated.
You are defending deplatforming, while simultaneously telling people to go to different platforms if they want different standards of moderation. Do you see how this doesn't work?
You're right, it's not Reddit, and "because you're upset" would take it in that direction. Let's not.
> it bothers me that platforms can be taken down everywhere because of an unrelated group that happened to use them for something horrible
Does it bother you that many people are calling for exactly that wrt Facebook right now? I just checked your comment history and saw no evidence of that, but I figured I'd ask instead of pretending to be psychic.
> Will Signal be taken down for refusing to add a backdoor?
I don't think anyone, including Apple or Google, considers Signal to be in the same category that requires moderation. Why not? Because there's this commonly applied but never defined distinction between public and private communication. Facebook is considered public, even though some communications there can be private. Signal is considered private, even though you can form pretty large groups of people who are nearly (but not completely) strangers. I wish someone would codify the difference, and its implications wrt moderation/takedown requirements. The lack of clarity around such issues is why both posters and platforms can claim immunity while toxic content spreads.
I create a new social network startup. Early on, most of the people it attracts are those banned from Twitter and Facebook - since they don't have a lot of other options. In addition to normal social network things, they post some questionable and inciting content. Since I'm a startup, I have a small moderation team and no fancy AI moderation so most of it slips through the cracks. Is my social network "built to enable terrorism"?
The political situation we find ourselves in, even though it was allowed to fester for years on established social media platforms, seems ideal for securing a monopoly for those same platforms. How could a competitor get a foot in the door without being accused of catering to extremists?
> Let me paint a hypothetical. [...] Since I'm a startup, I have a small moderation team and no fancy AI moderation so most of it slips through the cracks. Is my social network "built to enable terrorism"?
Not hypothetical to me!
I once ran a for-profit online community. It was a startup, with strictly volunteer moderators. It was an early "social networking" thing; honestly more like "a BBS with some primordial social features". But hey, sounds like your hypothetical to an extent.
This doesn't mean I know anything.
Just means I'm sympathetic to the plight of folks trying to make that sort of thing a reality. For the record, I'd sure like to give it another try at some point myself.
Anyway, intent matters here, to an extent. Parler advertised itself as a more or less moderation-free space.
That's quite a different thing from Twitter and FB, with their codes of conduct and actual moderation teams. I mean, the line may be fuzzy, but it's there. I'll be the first person to say that Twitter and FB suck, and moderation efforts for advertising-driven user content mills are probably eternally doomed, because their very business model dictates that their user-to-moderator ratio is always going to be laughably huge: far too large to enable effective moderation barring some kind of generational leap in AI moderation tools.
But there is at least the semblance of a good-faith effort there from those two, as much as I dislike them.
The notion that you'd need moderation should not come as a surprise to you. It's not 1997, so it's not like you don't know that this kind of thing happens. If you want to build a social network, handling the moderation load is part of your job, not an afterthought.
We absolutely allowed large social media platforms to get away with it for far too long. It's not the only thing making them a monopoly -- the network effect of having all of your friends in one place is also a significant barrier to entry for any new social media site.
Fixing that after the fact isn't easy. But it doesn't mean that you can act as though you're not very, very, very late to the party in trying to establish a new social media site. In 2021, it's part of any new site's job to make sure it's not being used for crime, or at least to make enough of an attempt that authorities don't see it as being implicated.
Seems like a bit of a catch-22, doesn't it? If we set moderation standards too low, then big social-media companies are evil because they're not doing anything about harmful content. If we set those standards too high, then big social-media companies are evil because monopoly. And there never seems to be any space in between. We need something better than just an excuse to hate on Facebook/Twitter/YouTube/etc.
"a platform specifically built to enable terrorism" this is hyperbole. We shouldnt have 2 companies arbitrarily determining what the thresholds are for a service/app to exist.
Parler is in the same category as facebook and twitter. It’s amazing that people have been gaslit to believe that Parler was intended for or mostly used by extremists. More amazing that people keep repeating this authoritatively when they clearly had no exposure to the service.
Yeah, it's the same category in the way a truck and a sedan both have 4 wheels. It's amazing that people have been gaslit the other way, into believing this was some secure, free-speech alternative to Facebook, when it's quite evident from the data pulls that they had no intention of being secure and were, at best, incompetent.
They were trying to growth hack using an extremist leaning, marginalized audience and got burned for it. Roll the dice, accept the outcome.
The same argument was made for Backpage. Only a small percentage of overall transactions were related to prostitution. The founder ended up in jail regardless.
The idea that a fledgling social media company needs a moderation effort akin to Facebook in order to be allowed to even exist seems very anti-competitive to me.
Content still slips through the cracks on Facebook too. Parler had a moderation system in place, although it was jury-driven and not centralized.
> The idea that a fledgling social media company needs a moderation effort akin to Facebook in order to be allowed to even exist seems very anti-competitive to me
Moderation seems to be an activity that can scale linearly with users (content). So if you have 1/1000 the user base then you have roughly 1/1000 the moderation effort.
Of all the barriers to entry in the social media market, moderation seems like the smallest one, because it shrinks in proportion to your user base, whereas costs like operations and development staff don't scale down nearly as well.
Of course, if you start a social media company with the intent to be a haven for content that "takes a lot of moderation effort," then obviously you are setting yourself up for a situation where it's difficult to compete. If you need 10x the moderation actions per user that Facebook or Twitter do, then you have 10x the per-user cost too.
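To make that concrete, here's a back-of-envelope sketch of the linear-scaling claim. Every constant in it (posts per user, flag rate, reviews per moderator per day) is a made-up placeholder for illustration, not a sourced figure:

    # Back-of-envelope: moderation headcount scales roughly linearly
    # with content volume. All constants are illustrative guesses.
    def moderators_needed(daily_users, posts_per_user=1.0,
                          flag_rate=0.005, reviews_per_mod_per_day=500):
        """Moderators needed to review every flagged post each day."""
        flagged_posts = daily_users * posts_per_user * flag_rate
        return flagged_posts / reviews_per_mod_per_day

    print(moderators_needed(2_000_000_000))              # FB-scale:  20000.0
    print(moderators_needed(2_000_000))                  # 1/1000th:     20.0
    print(moderators_needed(2_000_000, flag_rate=0.05))  # 10x flags:   200.0

The last line is the point above: if your niche gets flagged at 10x the rate, your per-user moderation cost is 10x as well, no matter how small you are.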
That "fledging" social media company was personally funded by a billionaire media family.
At what point do we stop pretending that a company with access to tens of millions of dollars of funding (or more) is a "fledgling" company that isn't responsible for its own failures?
That’s a great question for these app stores. How much Nazi content and how many calls to violently overthrow the government are too much to be allowed on the App Store? Kind of like asking the FDA how much arsenic should be allowed in my Cheerios. I’d like to see what their idea of the right threshold is.
Is the implication that one instance of objectionable content is enough to warrant deplatforming of the site? That seems like a really easy way for one competitor to take out another.
Not trying to imply anything. The threshold could be non-zero. The FDA notably has a non-zero threshold for things like allowed rat droppings, hair, insect parts, etc. [1] in your food, since no scalable process is perfect. I’m curious to know exactly how many or what percentage of Nazi posts are OK for an app in the App Store.