There were no good choices for Cloudflare here, and everyone across the internet who jams their fingers in their ears and shouts their position repeatedly is just contributing to the problem.
Private companies should not be the de facto moderators of free speech in our society. They are forced into that position by woefully inadequate governance by legal authorities operating multiple decades behind the current landscape.
Given that they should never be in this position, Cloudflare is choosing between "platforming the bad guys" and "censoring free speech". They have navigated this imperfectly, but have done better than most would, I think.
I truly hope that those unsatisfied with this outcome (which I suspect will be literally everybody) can take this as an opportunity to go help pressure their respective governments to figure out what the hell should be done, systematically, about hate speech on the internet. It's only 25 years overdue at this point.
> Private companies should not be the de facto moderators of free speech in our society. They are forced into that position by woefully inadequate governance by legal authorities operating multiple decades behind the current landscape.
That's not what happened here. They made an appropriate decision.
It's not difficult to say, "while we have no policies that restrict lawful content, we reserve the right to not service those who host and promulgate content that explicitly creates emergency threats to human life."
People and their companies aren't computers who have to allow everything to meet some absurd MVP product definition of false fairness.
To be even clearer, in this case it’s not even clear that this was legal content at all! Coordinated stalking of people!
Of course you could say “the legal system should handle it”. But what serious company says “let’s wait for a court to maximize our legal exposure”. The guy cited hard cases. This seems pretty easy!
And of course, why does Cloudflare proactively take down other sites that have anything to do with sex work but require a billion justifications for sites like this?
Because we’re a U.S.-based company subject to SESTA and the one site in question we took down affirmatively told us they were violating SESTA. SESTA is a very bad law. But, if you’re violating it, don’t wave that fact in the face of your infrastructure providers who are liable under the law for providing service to you. We continue to work to overturn or repeal SESTA.
We never told you we were violating SESTA. We never waved it in your face. You could have given us some warning, but you didn't.
Until you show evidence on your work to overturn/repeal SESTA, I'm going to call bullshit on that.
Cloudflare knowingly fronts many other sites that are clearly violating SESTA, so obviously you don't think it's that big of a liability as you claim to be.
Not to mention your Head of Sales reached out to us offering Cloudflare services a year after kicking Switter off when we mentioned we were dealing with DDOS attacks as an escort directory.
I do understand that Cloudflare can't just violate SESTA/FOSTA.
That being said, the communication and messaging around those decisions were clearly different than what's happened with Kiwi Farms. I'm not expecting Cloudflare to violate the law, but my goodness is it really obvious to me that taking down Kiwi Farms was a much harder decision for you than taking down those sex sites.
This kind of feeds into my long-running criticism of how Cloudflare handles adult content in general. You launched a DNS filtering service that accidentally censored the GLAAD website -- and to be clear, my beef is not that Cloudflare made a mistake and I'm not implying that any of that censorship was intentional. My beef is that I can't imagine you making that same mistake around bigoted content. I firmly believe that if you were launching a DNS filter for hate speech, you would have done more testing before you launched it. You would have been scared enough about that filter that you would have made sure it wouldn't accidentally censor a mainline political blog.
But to this day, 1.1.1.3 filters adult content but not sites that are dedicated to hate speech. Kiwi Farms wasn't blocked from 1.1.1.3. That may not be intended as a statement, but it sure reads as one. It is impossible for me to look at those decisions and not come away with the conclusion that you are more comfortable censoring explicit material than you are censoring violent speech.
And it does make it harder for me to believe you when you claim that taking an absolutist position towards platforming even organized doxing sites is protecting marginalized groups. Because you're already launching your own services that make it easier for network operators to attack those marginalized groups; they're not seeing the same level of consideration that doxing sites are getting.
I lost a lot of respect for Cloudflare's "free speech protects everyone" argument when 1.1.1.3 launched. You can't simultaneously argue to me that we have to care about the principles of speech when it comes to banning a doxing site, and also that technically your sex-specific DNS censorship service is optional so it has no implications for free speech and it's just fine.
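As an aside, the filtering behavior itself is easy to observe from the outside. Here is a minimal sketch (Python with dnspython) of how one could compare the unfiltered 1.1.1.1 resolver against the 1.1.1.3 family resolver; it assumes the filtering resolvers answer blocked domains with 0.0.0.0, which is how they are commonly reported to behave, and the domain is just a placeholder:

    # Minimal sketch, not an official Cloudflare tool: compare answers from the
    # unfiltered 1.1.1.1 resolver and the 1.1.1.3 "family" resolver for a domain.
    # Assumption: the filtering resolvers answer blocked domains with 0.0.0.0.
    import dns.resolver  # pip install dnspython

    def answers(domain, nameserver):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        try:
            return sorted(rr.to_text() for rr in resolver.resolve(domain, "A"))
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    domain = "example.com"  # placeholder domain to test
    plain = answers(domain, "1.1.1.1")
    family = answers(domain, "1.1.1.3")
    print(domain, "unfiltered:", plain, "family:", family,
          "filtered:", family == ["0.0.0.0"] and plain != ["0.0.0.0"])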
Cloudflare will ignore reports of DDOS-for-Hire websites that are illegal almost everywhere in the world, including the US. So, you see yourself as free to ignore laws when you feel like it?
I was thinking “if I was your lawyer” and didn’t type it. Then hit enter and saw my message, and edited it. The problems of getting in flamewars while making breakfast!
I would recommend to anyone to follow the advice of their lawyers regarding posting to hacker news! But it’s mainly a joke
EDIT: and to the original post, I was being way too glib. I do kinda believe what I say but there are less aggressive ways of saying it. Again, breakfast posting, but legitimately touchy subject for obvious reasons and I should keep my cool.
> to be under oath and claim nobody at your company was like “maybe this site is coordinating illegal activity” and you said “nah” and continued to provide services for them?
I think Cloudflare did the right thing. But I'd fight for a CEO's right to make calls about user-generated content without worrying about liability because someone suspected something.
I suppose the contention here is that at some point you’re looking at a website, are told its modus operandi, see a lot of the content it hosts… and at some point Section 230 starts being less relevant.
Like if you have multiple incidents with the same site, at some point you need to actually acknowledge that these incidents are here! You might still declare “it’s ok though” but honestly that argument is way easier with something like Reddit compared to something “single-use” like KF.
Obviously not a lawyer, but it feels possible to argue this in a securities fraud case
Is that a rhetorical question or a sincere one? Legal liability. American law has lots of direct liability for Cloudflare under SESTA/FOSTA for being involved in sex work websites. There's not equivalent liability for hate websites.
It is rhetorical. CF admits that they have contacted law enforcement several times about content on Kiwi Farms. They clearly know they are hosting the “problematic” content. They understand how that site is used. That seems like an admission of guilt to me!
Anyone who wants to be a bit of an activist investor: who at the company is putting the company at needless legal risk?
As far as I'm aware, KF didn't come with the kind of direct liability under US law that sex work content does. I welcome the chance to learn that I'm wrong here!
This seems like the correct thing. They realise the content is dodgy.
Report to law enforcement repeatedly hoping that they look at it and give them a legal reason to shut them down.
If they shut things down as a private company, so long as the customer is not in breach of the service contract and the content is not illegal, couldn’t they mount a defence?
This sounds like a reasonable strategy. I don’t understand this need for private justice.
> And of course, why does Cloudflare proactively take down other sites that have anything to do with sex work but require a billion justifications for sites like this?
It’s almost certainly cultural more than anything else. Sex workers are regarded negatively by vastly larger proportions of most communities, while hate groups are incredibly partisan. Not that it justifies the distinction, if anything it should be a clarion call to humanize and decriminalize sex work.
> It's not difficult to say, "while we have no policies that restrict lawful content, we reserve the right to not service those who host and promulgate content that explicitly creates emergency threats to human life."
And if everyone did that, it's the exact same as government censorship minus any sort of due process or ability to seek redress.
There's this weird idea that government censorship is abhorrent but private censorship is somehow without sin, even when the results are basically identical.
They can't directly remove your physical freedom but corporations especially when acting in unison can remove most of your economic freedom. If enough precedents are set where large service providers deny service to groups and individuals at the behest of the mob, eventually it will become politically and financially expedient for these providers to pre-emptively deny service to a whole basket of people.
It's somewhat amusing that corporate run dystopias were always imagined as a product of unfettered libertarian policy in science fiction and film but we may very well slide into one being pushed the whole way by the very people who decried such policies in their youth during the 80s and 90s.
It's not difficult to say, but it can be difficult to live at any meaningful scale. That invites endless pressure campaigns and similarly endless accusations of acting arbitrarily, capriciously, or with insert-bias-or-agenda-here. None of those are free to handle in any responsible or timely fashion. Never mind what happens should a genuine mistake be made.
It puts the company in the same position as Facebook in regards to moderation. It's endless, expensive, and your work is never good enough. Not a desirable position for most.
100%. To me this isn't really about KF (which clearly sucks and should be offline, but through actual legal processes), this is a matter of, "When does internet infrastructure end and content moderation begin?" As I mentioned in a previous discussion[0], Cloudflare finds itself right at the blurred edge of this line, made more complicated by CF providing both hosting, which is generally seen as content, and DDOS mitigation, which is more ambiguous.
The same people who cheer this decision wouldn't be happy if, say, DNS servers refused to resolve mega.io because it hosts illegal pornography. Or if their ISP started blocking PTP or nyaa.si for copyright infringement. This is to say nothing, of course, of any suspect political interference in internet infrastructure, which we already see around the world[1].
On this topic of Cloudflare "finding itself right at the blurred edge of this line", people might find the Twitter account of Blake Reid--a Clinical Professor of Law at the University of Colorado Boulder who works a lot on both network neutrality and section 230 issues--interesting (and not just this one thread I have linked to here).
There were no good choices because they didn't think through their ethics in advance -- even given their history with other sites like Daily Stormer... They decided they were "just" an economic entity, not a moral one. Unethical use of the services was something that tainted the buyer, but not the seller, and besides, should they really take on the obligation to think about such difficult non-technical things when that could be pawned off on lawyers or politicians or something?
The moral actors in their vision of the world are the "end users" -- the specific individuals using a platform for morally questionable purposes -- and the "government/legal system" which should be doing more to stop them from doing so. Platforms are these magical things that only have technical, legal, and financial obligations, not moral or ethical ones.
I personally don't agree with that view. Any large company doing business faces various ethical challenges. Failure to grapple with them in a serious way means Cloudflare's ethical challenges lead to 'one off' band-aid solutions rather than building a foundation on which to handle future difficult decisions.
This is over until the next one, and nothing obvious was learned.
>100%. To me this isn't really about KF (which clearly sucks and should be offline, but through actual legal processes)
Agreed! The problem is that I am not seeing a way to get there. I also don't see any incentive for the legal system to change. In fact I think there are far-right elements who probably see the situation as a Good Thing.
I put Infowars forward: there has been an actual defamation judgment, the perp openly lied in court and was called out by the judge, and when it’s all said and done Infowars will be doing the same thing and Alex Jones won’t be materially that much poorer. In fact some of the right wing media is calling it an assault on the first amendment and potentially going to market it. (I think it was Kirk on The First that I saw claiming that it was a liberal attack on the first amendment)
Complete and pure free expression seems like a concept for gentlemen and we are very much in a post-gentlemen US right now. I agree that there should be a legal process but by the time it can execute, kiwi farms will have morphed into something new.
> I truly hope that those unsatisfied with this outcome (which I suspect will be literally everybody) can take this as an opportunity to go help pressure their respective governments to figure out what the hell should be done, systematically, about hate speech on the internet. It's only 25 years overdue at this point.
This is a very good takeaway, as it is a complex problem. But I think in the interim, it's perfectly fine for private companies with no legal obligation to keep sites like these operating to just choose not to do business with them.
Yeah, regardless of your stance on KF, you have to support Cloudflare's right as an independent business to decide who they want as a customer. KF has many other options to serve their site. It’s really their own fault for using a product like Cloudflare that can be easily coerced into dropping a client by a Twitter mob.
>Hate speech in the United States cannot be directly regulated by the government due to the fundamental right to freedom of speech protected by the Constitution. While "hate speech" is not a legal term in the United States, the U.S. Supreme Court has repeatedly ruled that most of what would qualify as hate speech in other western countries is legally protected free speech under the First Amendment.
There is no government involved here though. If Cloudflare has a hate speech company policy, then as a company they can choose who they serve.
Kiwifarms is free to get another company that aligns with their goals.
I'm having trouble understanding 1) why everyone thinks Cloudflare is the singular hole through which the internet flows and 2) why people think a private company does not have the freedom to do what it wants.
If you don't like what Cloudflare is doing, then speak with your wallet and don't use them, there are numerous other providers of ddos protection
See the parent comment I was replying to. We agreed on the statement that "Private companies should not be the de facto moderators of free speech". I just think it is weird to argue this in a thread where we have just turned 180 degrees from our original premise.
Which is how we end up here with a thread full of people morally outraged that a group decided to not tolerate hate speech, harassment, stalking, and driving people to suicide. I think you’re right that a majority doesn’t agree with the content but a majority certainly tolerates it.
The economy of Germany is a highly developed social market economy.[24] It has the largest national economy in Europe, the fourth-largest by nominal GDP in the world, and fifth by GDP (PPP). In 2017, the country accounted for 28% of the euro area economy according to the International Monetary Fund (IMF).[25]
The EU has a similar population size, wide censorship laws for this kind of conduct, and a similar economy size. China has a similar economy and much stricter speech laws. The US isn't economically special because of free speech.
The lack of free speech laws means its impossible to accurately determine whether the reported figures are accurate, since the government can just censor what they don't like. So no, I (and I imagine most conservatives) don't accept that conclusion.
oh ok, well if we're in the conspiracy zone then you just have to accept that actually they have a 1 million times greater economy, because no figures can be trusted.
Facetiousness aside, the censorship laws in the EU are around hate speech like flying Nazi symbology, not laws that let them silence people reporting economic numbers.
The problem isn't that nobody understands that free speech is not limitless, the problem is that literally nobody wants to be in the business of defining the exact boundaries of allowed speech and how to enforce it; there is no perfect answer. Cloudflare was taking the position that it's not their job, and they're not alone as far as internet services go. There are, in fact, other hosts that do basically the same thing, see Nearlyfreespeech for example.
My point isn't to weigh in on this specific decision, but I want the rhetoric around this stuff to evolve away from pretending that defining the boundaries of what speech should be protected is super easy and objective. It's really not, and it never will be.
> literally nobody wants to be in the business of defining the exact boundaries of allowed speech
That's because there shouldn't be one global boundary enforced centrally. This kind of problem is a direct consequence of the scale and alignment mismatch between the technical structures (here Cloudflare) and the scale at which there is political cohesion (apparently much lower scale here, since there is such an irreconcilable disagreement). Each politically cohesive group should have the ability to make their own policies. That's how federated things work (email, Mastodon or BGP). Hence the kind of clashes we get regularly: the size of most things has become so huge, which is completely nonsense imho (eurozone, food/simple goods production, media).
Are you suggesting that large services like Cloudflare shouldn't exist, and instead there should be an ecosystem of DDoS-filtering reverse proxies? I do agree to that, though I think the problem remains that most of them do not want to be in the business of trying to decipher law and morality. So at the end of the day, the buck does stop somewhere.
And frankly, the example of Mastodon doesn't inspire confidence. Mastodon instance-level blocks have turned the platform into a huge mess. I genuinely would not be surprised if there was no single instance I can sit on that will allow me to interact with everyone I know on Mastodon, and as far as I've heard, if I choose to host my own instance and not to block certain instances, this will lead to my instance being blocked on some instances. I could be exaggerating a little, but this seems quite annoying.
Admittedly, e-mail works a bit better in this regard, but it's certainly not without issues (SPAM, deliverability, etc.) Still, it's perhaps possible that platforms that deal with public broadcasting are inherently more sensitive to cultural clash than ones that only deal with more or less direct communication.
Consolidation of power into platforms like Cloudflare is still a problem, but even if we fix that, we still have another service that has a lot of people all in one big amorphous zone: the Internet. I do wonder a lot if the Internet will ultimately wind up unifying a bunch of cultures, or if it will wind up creating even more bifurcation than it eliminates. It's starting to feel more and more like it creates more bifurcation, just a bit.
I don't understand trying to hide behind an argument other than "my tribe is in power here so you submit to our rules". The post you linked falls apart with how you define various concepts (how you define the terms of peace, etc.). The argument is eventually settled (like nearly all of them these days) by who is in power. They'll define the terms of peace and in a way that paints their causes as good and their enemies as evil.
If this is how you view someone using their weight to protect people from literally being stalked, harassed, and driven to suicide for fun then gods help us all if that ever stops being the case and the next tribe decides that it’s fine.
This act is evil and morally wrong in all political reference frames. Anyone who’s arguing that it’s partisan or woke is focusing too hard on the victims, trans women, and not hard enough on the perpetrators.
This is a nice platitude, but I'm not seeing the relation. Service providers that follow the law will in fact, stop tolerating a client when law enforcement tells them to do so.
> Hate speech and organizing to harass and other IRL hate acts is not free speech
“Free speech” is a philosophy. It makes no sense to describe a particular expression of speech as free or not. Hate speech is speech. Whether one should be free to make it is another question.
I think it’s easier than that, hate speech just isn’t speech, it’s an act of violence that happens to use your mouth, pen, or keyboard — 1A as it’s currently interpreted is way too broad, the court seems to find other forms of violence, even those done for political expression, as not protected, but gives exception here for some naive “sticks and stones” argument.
It is in America, for the most part. You can absolutely organize to harass people if the harassment is in the form of verbal abuse, for instance. Cloudflare is saying that something happened in the last few days on KF that was a genuine "emergency". I don't think this is just an excuse, actually – Prince seems unusually committed to honesty about this sort of thing. I presume people were organizing specific violent acts on KF, which is not "free speech" even in America.
Your second item isn’t even close to an illegal threat in the USA, it’s the loser equivalent of saying “mine is 12 inches”. It’s not true, never happened, and nobody believed them.
> second item isn’t even close to an illegal threat in the USA
They said they called people to "plant bombs" where the woman was going and had posted "armed men...waiting" for her. That sounds an awful lot like "the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual" [1].
If hate speech is not free speech, then those who define what is hate speech define what you can or cannot say.
If there's defamation, harassment, or incitement to violence, we should deal with that in an open court with juries of our peers, not in some dark board room.
That's easy. Are they calling for illegal actions against people?
That's a pretty easy litmus test.
As a corollary, it's akin to comparing "I don't like the president" vs "Let's go kill the president" (this is a comparison of allowed vs unallowed speech in the USA, not a call to).
Advocating voting against is 100% legal. Advocating killing is 100% ILLEGAL.
Kiwifarms was doing the latter, up to and including actions threatening violence, "assisted" suicide, and murder.
Same, there’s no contradiction here. The same political group that is pressuring the companies that currently hold the real power to take down this hate is also pushing for a national hate speech law.
Where I’m sure the disagreement lies is whether or not you find it morally okay to protect the current victims by less than ideal means.
To be fair, there is the entire concept of cancel culture which basically is all about organizing to harass people and is basically supported by every large platform.
Hate speech is often about saying other people ought not be able to express their ideas and opinions, and that the most effective way to bring about this result is for them to not be alive any more.
Eliminationist rhetoric is a subset of hate speech overall, but it certainly exists and is trivially easy to discover. It's odd to me that none of the self-professed 'free speech absolutists' ever seems to engage with this point.
Most eliminationist rhetoric is still protected in the States under current precedent. It advocates unlawful action but without "imminency". It could have been forbidden under the old "clear and present danger" standard.
Forbidden eliminationist rhetoric is quite rare and would be something like the "cockroach" broadcasts in Rwanda.
All of this is irrelevant and beside the point; 'hate speech' is not a magical word you say when mean people on the internet say things you don't like.
If you can't prove material damage in a court, it should be allowed.
Hate speech always leads to further extremist behavior and death threats. Now, the US is very tolerant of hate speech in itself. The problem is haters are completely incapable of avoiding the next step wherein they call for the deaths of those they hate. The very moment they do that I am perfectly fine with all of our existing laws on things like terroristic threats being wielded against those making the threats.
You have the right to speak, but you also face repercussions, specifically when that speech is a call to harm.
> free speech as a philosophy is about allowing all speech, because otherwise it's just mostly free speech
Then that's a philosophy virtually no one actually holds.
Very few people think death threats, fraud, etc. fall under free speech. If you do your speech at 3am with a loudspeaker in a residential neighborhood, you're probably getting dinged for "disturbing the peace", because other people have rights too, and society winds up having to resolve the conflicts.
In this case, a similarly important right - freedom of association - also applies.
> Then that's a philosophy virtually no one actually holds.
correct. most people don't hold a free speech philosophy. people just like taking a high moral ground they don't actually have.
> Very few people think death threats, fraud, etc. fall under free speech
the US government can't prosecute death threats unless it can prove intent beyond you just saying it, and it also can't arbitrarily prosecute lying
> If you do your speech at 3am with a loudspeaker in a residential neighborhood, you're probably getting dinged for "disturbing the peace", because other people have rights too, and society winds up having to resolve the conflicts.
if you said the same thing at a much lower volume, no one would care. the problem there is noise pollution, not the content of the speech said.
I see that policy works extremely well in cases like <https://twitter.com/keffals/status/1566153033586810885>. As long as you give all the information necessary for someone interested to interact in a harmful way, it's fine, but you have to frame it in a way that doesn't suggest harassment. Just speculate about all the locations they could possibly be having lunch, and trust that nobody will harass them.
Spreading rumors about them and interacting with friends, family, and known associates is fair game. Also posting their public contact information is also fair game.
I presume users on kiwifarms (KF) use https://en.wikipedia.org/wiki/Parallel_construction techniques to publish private information elsewhere (such as private addresses) so that the information can then legally be reposted on KF. Coordination-of-information has an accomplice role in some of the illegal activities "reported" by KF.
I do think, if there was competent legal governance in this space, that's the conclusion they would have reached. I think you understand my larger point, regardless.
I think Cloudflare’s choice to block them is fine and CF was probably fine allowing their use of the service before, given the damage to their reputation they apparently considered acceptable.
Historically, you needed money or influence or both to make a “bad” (or in this case, actually bad) message widely available. What we’re seeing with Cloudflare and other companies choosing not to do business with some people is a bit of a correction back toward the past, after a hard swing toward unchecked, potentially widespread reach of speakers who wouldn’t have been heard much before.
>Cloudflare is choosing between "platforming the bad guys" and "censoring free speech".
Unless they provide hosting services, this seems a little distorted. Cloudflare is a DDoS protection service, not a platform. For nearly as long as there have been laws, there has been a general understanding that even the worst of us are entitled to the protection of the laws. Even Bill Cosby was entitled to his Fifth Amendment rights when he was given immunity for his infamous testimony. I don't see why Cloudflare's role should be seen differently; they have become the online anti-DDoS police, in the face of an Internet woefully under-equipped to manage such attacks.
Only in the case of the Daily Stormer, who deliberately turned Cloudflare's neutral role against them by saying "Cloudflare supports us", does there seem to be an exception, because they can't pretend to be truly bound by the law. But calling this "platforming" is basically playing into the hands of people running DDoS attacks.
Kiwifarms's hosts platform them. Cloudflare protected them. The difference is important. I don't know what happened exactly, so I can't comment on it, but I'm interested to find out what happened over the last two days.
I feel the "not providing hosting services" argument doesn't really hold water. If the content is only accessible over the internet when I connect to Cloudflare, it sure feels like they are providing hosting. Sure, they only provide a proxy ... which is a copy of the content on their servers, which is hosting.
Obviously Cloudflare wouldn't be willing to provide the name of the company doing the actual hosting for very good reasons. However, this makes it impossible to make the hosting provider aware of what they are hosting. I don't think a lot of hosting providers want to willingly host neonazi sites, however when set up behind Cloudflare, it is quite likely they have no idea they are hosting neonazi sites to begin with.
If CF was "just" DDoS protection, it does seem quite reasonable that CF should not be obligated to do any moderation. However, the service they provide comes with quite a bit more: instant production-grade global web hosting (caching) infrastructure and ability to hide your backend infrastructure from the general public.
>If CF was "just" DDoS protection, it does seem quite reasonable that CF should not be obligated to do any moderation. However, the service they provide comes with quite a bit more: instant production-grade global web hosting (caching) infrastructure and ability to hide your backend infrastructure from the general public.
You're contradicting yourself here. Those services you described in the second sentence are what is necessary for DDoS protection. Likewise, when cops arrest me for throwing a paint balloon at Richard Spencer, it's not because they're acting as his personal security detail. It's because I broke the law.
>However, this makes it impossible to make the hosting provider aware of what they are hosting.
Again, this is simply harassment prevention. If the hosting provider wants to know what is on their service, they can just look. It's not like Cloudflare is providing an encrypted service to keep hosts from knowing what is on their servers. It's just preventing people from harassing the host about it. Law enforcement can walk right through Cloudflare if they want, it's vigilantes who are stymied.
>If the content is only accessible over the internet when I connect to Cloudflare
On the one hand, as someone not employed there, I would have supported Cloudflare in continuing to provide service to Kiwifarms if that was their conviction.
On the other hand, if I were the CEO, owner, whatever of Cloudflare, I would have cut ties with Kiwifarms a long time ago on the grounds that the site promotes truly immoral and reprehensible content, and I wouldn't want any resources I control going toward helping them do so, for my own conscience to be at ease.
Not just immoral and reprehensible, the campaigns of targeted harassment they undertake limit the victims' speech. If you care about people being able to freely express themselves, today is a good day. I don't know why the free speech defenders miss this (I do know).
And the hosts. Server owners have rights not to host content they don't want to host, for any reason at all. Business rights which 'that side' conveniently forgets about when it suits them.
Yeah - it's something I've seen brought up recently that really helped me think about these issues. Yes, annoying people or just insulting them is valid speech that I wouldn't necessarily trust a government to decide on the legality of, but it's important to balance between multiple speakers - just because someone is the loudest or most notable doesn't mean they automatically should have their right to speak be upheld the most. In this case, and in many others, the "free speech martyr" is explicitly engaged in speech meant to suppress others' ability to speak and express themselves.
That doesn't seem like something tech companies should be making judgements on. They are because the government is totally failing here. But if these sites are so dangerous that they need to be immediately shut down, the government should be giving a directive to do it.
Yes to all of the above. And I stand by my statement. All of those things can be true and I can still find the site reprehensible and wish to provide exactly 0 resources to assist them in any way.
Let's not obscure things by calling it an issue of "hate speech." That is an impermissible broadening. As they said, "hard cases make bad law." The only way to mitigate the badness is to make the decision as narrow as possible.
It's about illegal threats of violence. Those were against the law long before anyone ever used the term "hate speech."
>It's about illegal threats of violence. Those were against the law long before anyone ever used the term "hate speech."
the illegal threats of violence are always removed as soon as possible from KF, just as they are on every other site. what exactly is the difference here?
edit: I'm rate limited; there is (or now, was) a point-by-point rebuttal to the "KF bullied people to suicide" claims on the front-page. tldr it's a false narrative, there's no evidence anyone killselfed because of their KF thread. would you like to know more? too bad, you can't, because the site is down so you can't read it.
the "counter" / "KF kill count" / etc is a running site joke; it's not a joke about actually bullying people to suicide, it's a joke about the unfounded reputation of the site itself; part of the punchline is that everyone in the in-group knows that the number is zero but the out-group thinks it's in the dozens. get it? well I guess it's not that funny when I explain the joke, but then no joke is, right?
The big counter celebrating the number of people they've harassed into committing suicide? People have gone to jail for it. CF should be the least of their concerns right now.
This is such a bad faith comparison and in no way related. Facebook hosts its own content/infrastructure. Cloudflare's DDoS protection service and Facebook as a whole are not related.
A more accurate claim would be, if someone makes a Facebook post containing an illegal threat of violence, they (Facebook) _do_ ban the account of who made a post containing illegal threats of violence.
> Facebook hosts its own content/infrastructure. Cloudflare's DDoS protection service and Facebook as a whole are not related.
Pretend for the sake of argument that Facebook did use Cloudflare, or that my example were about some other platform that does.
> A more accurate claim would be, if someone makes a Facebook post containing an illegal threat of violence, they (Facebook) _do_ ban the account of who made a post containing illegal threats of violence.
Exactly my point. When someone does something banworthy on Facebook, we let Facebook ban just that one person, rather than banning all of Facebook.
I don’t think they’ve done well, and I think we can say that regardless of our opinion of this choice unless you think they should never deny service.
The mistake they’re making is this: they’re treating each event like a unicorn. They need to consider the overall decision making process. What are the inputs? What are the outputs? And they need to make these transparent.
The failure to do this results in the CEO publicly regretting previous ad hoc decisions. It’s also bad for the Internet. If you need to maintain the option to remove a customer — and you do — you need to be clear, consistent, and transparent.
It’s similar to ransomware decisions. You don’t want to make a decision about paying or not paying ransomware while you’re under pressure. Stress damages your ability to make rational decisions. Write a playbook and use it as your base for decision making.
This was not a free speech issue and I suspect that some of the attempts to reframe it that way are deliberately muddying the waters.
The issue at hand is that Cloudflare was providing material support to terrorists.
The site at the center of all this wasn't merely being critical of a group of people, it was being used to gather and disseminate personal information and coordinate acts of terrorism. Cloudflare meanwhile is not a public utility and had absolutely no obligation to provide services to terrorists; that was a smokescreen meant to deflect criticism of their decision to do so.
Free speech absolutists should really consider whether their argument is being strengthened or weakened by this specific case before hitching their ideological wagon to it.
Is it censoring free speech when the goal of the speech is to actively harm people? I’m not sure of any nation that has no caveats to their idea of free speech
Is declining to participate by re-transmitting such speech even censorship? You can't force a company to take you as a customer, being a shit head isn't a protected class.
That's only a few providers and too expensive for anyone but someone using Google ads to fund their operations, because Cloudflare created that cost by its illegal actions.
Regardless of the behavior of the people at kiwifarms, I still find it odd that we have protected classes of people that are more equal than others. Everyone should receive the same rights.
Calls for acts of violence already aren’t legal. Hate speech is outside of that scope, otherwise we wouldn’t have another term for that (all calls for violence could be hate speech, but not all hate speech is calls for violence)
Therefore what is hate speech? Are words violence in and of themselves?
My interpretation of hate speech is that it attempts to "dehumanize" a category of people with malice.
Not a lawyer or a linguist, just Yet Another Internet Spectator.
Sometimes hate speech can be done with a smile and a calm voice, but it's still toxic. I'd posit that that kind of speech has been quite effective in ramping up the political divide and I only see it getting worse.
I recognize that real censorship is a dangerous thing, but would counter that there's a lot of speech that, while legal, should not be celebrated.
>While law enforcement in these areas are working to investigate what we and others reported, unfortunately the process is moving more slowly than the escalating risk.
I wonder if their evaluation of the "escalating risk" had anything to do with what the legal standard of imminent action is. Probably not.
Is it really though? If so, then more than half the rappers would have been locked up for this by now.
I suppose whoever put such a law in the US court system is holding their breath for the guy who got on international TV / web streaming / in front of a huge crowd of people in Ferguson (?) and said 'burn this bitch down' - burn this mutherfucker down.. [1]
I dunno, maybe he was arrested for this and it is a real thing, if so I missed the news about it.
Or maybe it's technically possible to have to go to court for such a thing, but maybe only a few ever have, and the outcome of such a thing is anyone's guess, even those really into speech law [2]
GP was likely aware of this but didn’t explicitly state why imminence was important.
Where does it specifically say in the US Constitution that you’re allowed to incite violence? :-)
Usually the answer here is going to be someone cites the 1st Amendment and a person's right to free speech. From that we have Brandenburg v. Ohio, then Hess v. Indiana, and subsequent cases which use those Supreme Court precedents; these hold that 1A protection does disappear where someone is calling for “imminent violence”.
Many of the internet hellholes hiding behind Cloudflare have significant quantities of unmoderated and extreme discourse where participants do call for imminent violence against another party and that is not 1A protected behavior.
> true threats and "incitement to imminent lawless action" are separate doctrines
Wasn't aware, thank you. That said, this message [1] sounds an awful lot like "the speaker mean[t] to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual" [2].
Aside: And with that tweet read, I'm out. Happy Labor Day weekend folks.
> Where does it specifically say in the US Constitution that you’re allowed to incite violence? :-)
You mean lawless violence. :-)
I would bet calls for killing some private citizen (e.g. agitating, “kill Rodney Dangerfield”) would not survive as protected speech in the courts anymore.
>Many of the internet hellholes hiding behind Cloudflare have significant quantities of unmoderated and extreme discourse where participants do call for imminent violence against another party and that is not 1A protected behavior.
Which websites are you referring to, in what numbers are you talking about, and how are you determining that those calls for violence are imminent? Wouldn't that suggest that a lot of violence has already occurred stemming from those websites? (presumably not stopped by a slow legal system like Cloudflare implies would have happened in this case)
There are plenty of laws that prohibit speech that is a call for violence.
>Under Texas law, any threat of violence to either person or property can be the basis of a terroristic threat charge. However, that threat of violence must be accompanied with criminal intent to either follow through with the threat or terrify another into believing you may do so. There are six specific types of intent covered by Texas law, and the prosecutor only needs to prove you had one of them to obtain a conviction.
>cause a reaction of any type to his threat by an official or volunteer agency organized to deal with emergencies;
>place any person in fear of imminent serious bodily injury;
>prevent or interrupt the occupation or use of a building, room, place of assembly, place to which the public has access, place of employment or occupation, aircraft, automobile, or other forms of conveyance, or other public places;
>cause impairment or interruption of public communications, public transportation, public water, gas, or power supply or other public services;
>place the public or a substantial group of the public in fear of serious bodily injury; or
>influence the conduct or activities of a branch or agency of the federal government, the state, or a political subdivision of the state.
> unless that call for violence is intended to produce an imminent lawless action
So it’s fine to call for violence, as long as the violence in question would be legal if it were acted upon?
That makes so much sense, it seems like it would go without saying. If the violent act itself was legal (like a war, or an organized boxing match), why wouldn’t it be legal to solicit or petition for it?
>So it’s fine to call for violence, as long as the violence in question would be legal if it were acted upon?
No. It's fine to call for violence as long as your call is not designed to provoke and cause imminent lawless action. Brandenburg advocated for "revengeance" against the government if their demands were not met, and that was protected speech. Hess v. Indiana also affirms that advocating for lawless action is protected speech.
The point is more - you can legally say "kill Joe Biden" on the internet, but it becomes illegal if you're saying it to someone holding a gun to Joe Biden's head, who then fires it.
That made sense to me. Basic services, transit, blocking ddos… little, if any moderation.
Hosting content, more moderation.
I might strongly disagree with someone and I sure as hell won’t host their BS, but I still think some basic level of rights/ services should be provided.
I'm not sure I understand the distinction about why providing a CDN is fundamentally and completely different from hosting. Still coming off your servers either way.
It feels like they’re trying to construct a distinction here that allows them to continue providing web services to illegal/immoral content. And this isn’t the first time; they’ve told patreon to pound sand over pirates using their cdn too.
“It’s not actually hosted just a CDN” is the weakest of these though. Like wow that’s splitting a fine fine hair, for what I can’t really see as any particularly great underlying reason or principle.
And if the principle is free speech… why not host it too? I just don’t see the logic here.
I think the internet is just irrecoverably broken in a way such that technical problems like DDoS or NN escalate to social problems. We should not even be having these discussions in the first place: It should be infeasible for attackers to conduct DDoS. It should be infeasible for ISPs to surveil their users. The internet as we know it was designed to facilitate communications between non-antagonistic peers, that design is no longer suitable for use by democratic society at large.
> Visitors to any of the Kiwifarms sites that use any of Cloudflare's services will see a Cloudflare block page and a link to this post.
Cloudflare was providing security from DDoS attacks. Then all of a sudden they arbitrarily decided to hijack their domain. It would be one thing to stop providing protection. It’s another to say “no you see our content now”.
It would be like security at an event deciding to put on a band no one paid for. But still taking the money from the people hosting the event. The attendees are upset, the venue is upset, the original bands are upset.
Pretty sure that is a breach of contract. Feel free to drop them, but redirecting is wtf. Particularly when they may be interfering with an investigation (as they said, Cloudflare already took it upon themselves to involve law enforcement, who didn’t feel it necessary to shut it down).
“We may at our sole discretion suspend or terminate your access to the Websites and/or Online Services at any time, with or without notice for any reason or no reason at all. We also reserve the right to modify or discontinue the Websites and/or Online Services at any time (including, without limitation, by limiting or discontinuing certain features of the Websites and/or Online Services) without notice to you. We will have no liability whatsoever on account of any change to the Websites and/or Online Services or any suspension or termination of your access to or use of the Websites and/or Online Services.”
Kiwifarms controls their DNS; they can change NS records as needed, so I wouldn’t say the domain is hijacked.
> Kiwifarms controls their DNS; they can change NS records as needed, so I wouldn’t say the domain is hijacked.
While I agree in part, the DDoS protection isn’t meant to serve alternative pages per-se. It’s meant to mitigate hostile actors by making them check a box or something.
It would be one thing to take it down (terms clearly make that okay); but directing to alternative content I see as a possible breach.
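For what it's worth, the delegation point is easy to check from the outside: whether Cloudflare sits in the serving path at all comes down to the domain's NS records, which the registrant controls and can repoint at any time. A minimal sketch (Python with dnspython); the domain and the *.ns.cloudflare.com suffix check are illustrative assumptions, not anything official:

    # Minimal sketch: check whether a domain is currently delegated to Cloudflare
    # by looking up its NS records. The registrant controls this delegation and
    # can move it elsewhere at any time, so the domain itself isn't "hijacked".
    import dns.resolver  # pip install dnspython

    def nameservers_for(domain):
        return sorted(rr.target.to_text() for rr in dns.resolver.resolve(domain, "NS"))

    def behind_cloudflare(domain):
        # Cloudflare-managed zones are typically delegated to *.ns.cloudflare.com
        # (an assumption for illustration, not an exhaustive check).
        return any(ns.endswith(".ns.cloudflare.com.") for ns in nameservers_for(domain))

    domain = "example.com"  # placeholder domain
    print(domain, nameservers_for(domain), behind_cloudflare(domain))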
>I truly hope that those unsatisfied with this outcome (which I suspect will be literally everybody) can take this as an opportunity to go help pressure their respective governments to figure out what the hell should be done, systematically, about hate speech on the internet. It's only 25 years overdue at this point.
I see what you mean and that sounds nice but how would that work? With the internet being international I can't imagine what could be done really. What KiwiFarm is hosting is already illegal in many jurisdictions I'm sure, but as long as the servers are hosted in some country with lax regulation (or a poorly implemented one) then what can be done at the state level?
Well this particular example is a US website. The relevant (inadequate) legal framework for handling the situation is US Federal law, which ideally Cloudflare would have had to reference to determine whichever outcome should have happened here. So, while I'm far from an expert on policy, I imagine that'd be the place to start?
Is it? I thought that it was hosted outside of the US, and that while the admin was an American citizen he didn't live on US soil. That's from vague memories from years ago though, so maybe not accurate or up to date.
Although I guess as a European I don't know if I really trust the USA to do a good job fighting hate speech. We have a pretty different take on that over here.
Cloudflare has a really clear and seemingly mandatory option which is to just assert what we all know to be true: They can boot clients whenever they threaten their business. Everybody does this. There are near constant stories of users getting business-critical Google or Apple accounts suspended without warning or explanation. Those companies are ruthless about protecting themselves from liability and will err on the side of losing customers even when the violations aren't proven. Cloudflare wants to be seen as being above the fray and they just absolutely aren't. Anyone who thinks "it's just politics" is delusional. These are real people doing real things with real consequences.
It’s a fundamental issue though — there’s no “figuring it out” that a government can do that won’t either censor or facilitate. 25 years has been long enough to find tactical policy changes that make it easier, but there aren’t any, which is why nothing has happened. The choice we have to make is either de-shrine free speech above all else or entrench hatred, and it’s bogus that we haven’t picked the thing that doesn’t kill people yet.
Practical dichotomy — that’s why this thread exists. You either platform it or you don’t, and you’re either legislated to do so or not. What middle ground do you see that allows this degree of free speech without platforming hate?
> either platform it or you don’t, and you’re either legislated to do so or not. What middle ground do you see that allows this degree of free speech without platforming hate?
Nobody has been legislated to do anything here. Cloudflare is dumping Kiwifarms. There are other hosts. That's one middle ground: a diversity of opinions on what constitutes unacceptable speech. Here's another: the government has no right shutting down Kiwifarms in the absence of a true threat [1], but Cloudflare is free to.
If you look at the history of communication technologies, particularly public ones (e.g. the printing press and television), this pattern recapitulates. An idealistic explosion of creativity. Weaponisation. Scrambling alongside states over-reacting. Then a middle ground.
We don't have anything close to consensus on the Internet, save perhaps for X-rated content. So private actors are figuring things out. We'll probably see a government response carving out protections for both speech and platforms, though hopefully nothing as onerous as what was done with TV [2]. And then over decades an equilibrium will arise. An equilibrium between "de-shrining free speech" and "entrench[ing] hatred."
The middle ground that already exists - the right of free speech doesn't guarantee an audience, and the right to assembly doesn't guarantee a platform. Censorship is permitted within the marketplace of ideas as an inevitable consequence of the fact that coerced speech cannot be considered free, but the government is far more limited.
If Kiwifarms wants to continue "this degree of free speech" it's up to them to find someone willing to tolerate their bullshit, and then to not step over the line, as they apparently just did with Cloudflare.
It's odd how situational people are about when free speech requires someone to be given a platform and when it doesn't. There's an almost impressive 180 on free speech rolled up in this philosophy, and it's a philosophy that I'm unfortunately seeing online more and more nowadays.
The idea is that the government has the power to censor private institutions and public schools as it sees fit, but private companies have no right to censor or exercise their own freedom of association. Actively harassing people online is free speech that people should just ignore, but trans people merely existing publicly and openly in public spaces is dangerous propaganda that the government needs to put a stop to -- merely being open about their own existence is crossing the line. It's a philosophy that's happy to censor identity, and loathe to censor actions. It's a philosophy that sees government involvement in speech as fine, and private/social involvement in speech as an existential threat to the 1st Amendment.
Free speech is used as the justification for these policies and arguments, but it's only a justification. The actual goal is the reinforcement of existing social norms and hierarchies, and free speech is applied situationally in order to further that goal.
In general, be suspicious of anyone who claims to be a free speech advocate who has this kind of backwards view of the 1st Amendment. The point of the 1st Amendment isn't to make it easier for the government to censor, and it isn't to make it harder for private institutions to moderate their own spaces. That's not to say that we can't talk about the free speech implications of moderation decisions -- but if you're excusing government censorship while criticizing companies, I immediately get real suspicious.
> The choice we have to make is either de-shrine free speech above all else or entrench hatred, and it’s bogus that we haven’t picked the thing that doesn’t kill people yet.
Most of us never enshrined free speech above all else. It was never controversial that free speech had limits, that sites had the right to moderate content and ban accounts, or that businesses could refuse service to anyone. Prior to 2016, something like this would not have even been newsworthy.
Painting this issue as black and white is just wrong. Both sides have immense ramifications for the world. Accountability for censoring bodies and people on these platforms is not easily solved.
> Given that they should never be in this position, Cloudflare is choosing between "platforming the bad guys" and "censoring free speech". They have navigated this imperfectly, but have done better than most would, I think.
They chose what's more convenient for them, as a private company, since the stock had a bad response after their previous statement.
They are, after all, a company that has to respond to their shareholders.
Why shouldn't we prefer private parties administer the free speech which is most fitting for their platform, as opposed to overly broad legislation at the national level by a government which doesn't appear likely to catch up on tech within 5 years?
Pressuring government does not mean that the government will suddenly develop technical expertise. Even if the "right people" are voted into government at every level possible in a simultaneous magical moment, it would still take years for the government to develop its own internal consensus as to the state of problems and solutions & to slowly develop a workforce to administer technical policy. But one must question as to whether this is even in the cards for your respective nation.
In the meantime, if we prefer companies deal with the matter, people who don't like how things are done at least have the mere theoretical possibility of going it another way, assuming your market isn't so unhealthy as to only permit one entity (in which case you have a problem which a non-technical government may be able to deal with). The government issues monolithic force-backed policy, whereas the free market can create a diverse product ecosystem for different kinds of people.
And isn't the authority which is exercised by companies one which is ultimately quite fundamental — the freedom of association? The freedom to not have relations with those you don't want to talk to? Everyone should be free to yell their message on public property, but people should also be free to withdraw from each other if they no longer wish to be related. It is questionable to say that free speech must hinge on whether one party wishes to be related to another, especially when that other party has to maintain their services via expensive engineers.
> There were no good choices for Cloudflare here ...
I couldn't disagree more. Just cut off sites for organized harassment and nazis immediately. If you're able to, hand over any archives you have to law enforcement. Don't even talk about it. Just let such sites disappear one day. Don't give them any more attention.
Could you be breaching a contract? Maybe. But who cares? You think the KF owners are going to reveal themselves and sue? Let them.
This is what I find infuriating about the US media and many people in general: there's way too much effort spent on trying to appear neutral by bothsidesing every issue.
If CNN existed in 1938 Germany, after Kristallnacht [1], CNN talking heads would've gone on the air and said "sure a lot of Jews were killed, their homes and businesses ransacked and the authorities looked on without intervening but the perpetrators say they asked for it. Let's hear what this spokesman has to say."
So where was that on the website? Honestly you should be applauding that they actually switched from "final solutions" (an obvious edgy joke) to a normal name, in what seems to be a reaction to the 2017 Charlottesville rally.
>Private companies should not be the de facto moderators of free speech in our society. They are forced into that position by woefully inadequate governance by legal authorities operating multiple decades behind the current landscape.
When you have an algorithm that suggests things to people, you are a de facto publisher and whatever you do, you're choosing what to promote. In that case you have a responsibility to choose wisely, though it is a hard problem. Hard enough that in many cases algorithmic suggestions need to be avoided.
When you are a specialist with few customers there's no problem with picking who you work with.
When you provide infrastructure for large numbers of organizations, though, you must be very hesitant to moderate who you serve, for many reasons. For the most part, if what you're serving doesn't break laws in jurisdictions you respect, they should be left to operate as they will. There is a narrow band around that of "maybe you should, maybe you shouldn't". There are real problems with expanding that band to cover whatever topic the internet happens to be shouting about that day.
Exactly, and that's the problem: they can't have it both ways. They can't claim they are a common carrier while simultaneously deplatforming entire websites no matter what the justifications. The correct answer is for people who feel they've been wronged to file lawsuits. We must stop this extrajudicial form of justice-seeking; it will only end in death.
I see a pattern that everything requiring international collaboration is very very slow to fix. From tax havens to climate change. Because you can hide out in another country or blame another country and so on.
The notion that government should be in charge of effectively eliminating speech we don't like so that private companies don't have to is _far worse_ than the current state of things.
The problem is who defines what is hate - don't trust a govt to make that call - they are the last people I would trust.
We have no solution in this age - it was easy in the older days when consensus was reached within a village on what was bad for the community, and you either got tarred and feathered or thrown out.
I am honestly dumbfounded about the government hate on HN. It's elected by you and your peers. You can influence it and you can even be part of it yourself. If you want change do it and stop spreading FUD.
Our city council recently replaced all the street lights with new LED lights - one of our neighbors is convinced it is 5G - this is the same person who votes as well.
Govt is generally a low-quality effort until election time comes around, and then it's carefully crafted slogans and media, and the majority of voters fall for it every time.
Democracy is the best system we have, but sometimes the outcomes are less than desirable.
Right. Centralization of morals and ethics simply causes mass polarization and, eventually and inevitably, war over "which side wins." Those in favor of the rational gray area will be trampled on by both sides.
I'm basically happy right now with private companies being the de facto moderator of speech because your alternative of government censorship is unacceptably risky to people's freedoms, and private companies are doing a decent enough job at it as is.
> Private companies should not be the de facto moderators of free speech in our society. They are forced into that position by woefully inadequate governance by legal authorities
I feel frustrated by this debate; do people really believe that having every speech question litigated in court is good for freedom of expression online?
My take has been for a while now that having multiple layers of enforcement for rules is a good thing because it allows for flexibility. It's bad for us to have only two categories for speech:
- morally obligated to host without question.
- will get you hauled in front of a judge.
The actual outcome of bigger companies like Cloudflare, Facebook, etc... pushing more of their moderation decisions onto the government is that the government will be doing a lot more moderating, and governments tend to be pretty clumsy about that, and court systems tend to be slow (for good reason, they have safety precautions because prosecuting someone is serious business), and laws tend to be very reactive and either overbroad or out-of-date, and they don't tend to take into account niche communities with special needs.
But beyond all of that, the law is also just a harsh thing to fall afoul of.
I just don't understand how someone can say, "make it easier to prosecute people for speech" and treat that like the pro-speech position. Isn't it better when communities and industries can have lower-stakes moderation decisions that aren't going to end up with someone being thrown in prison or fined? "The government should handle moderation" is exactly how we end up with bills like SESTA/FOSTA.
----
> Given that they should never be in this position
I also feel weird about this line. There are a lot of tech people who are fine with critical infrastructure being fully privatized, but draw a line at that infrastructure making its own decisions about moderation. If Cloudflare believes its services are de-facto public infrastructure, then why is Cloudflare a private company?
I feel like a lot of tech people want to have their cake and eat it too. They want to have a private company to be able to throw its weight behind technical decisions and infrastructure decisions and to shape the entire market, but they don't want any responsibility that might go along with that. My take is that if you don't feel responsible enough to be in the position that you're in, then get out of that position.
More and more, I realize that there is a difference between choosing not to abuse power and putting yourself in a position where you can't abuse power. Cloudflare makes a lot of excuses about how scared it is of abusing power, but what is it actually doing to reduce the amount of power it has over the Internet? If Cloudflare is saying that it shouldn't be making decisions about which services can get free CDNs, then that is tacitly saying that its specific CDNs and DDOS protection services are so powerful that they're essential to the modern web. If they're so powerful that Cloudflare wants to be completely hands-off about access to these services -- well, that prompts the question, "is it good for that kind of power regardless of the speech implications to be in the hands of private companies?"
Because Cloudflare has the ability to shape a lot more than just speech, it is in a privileged position to make decisions about core Internet infrastructure. If its owners genuinely believe that they're not capable of making those decisions, then the irresponsible part here is not how they exercise that power; it's that they hold onto that power and continue to expand their market share and centralize it, even though they don't think they (or anyone else) are fit to wield it.
----
> Cloudflare is choosing between "platforming the bad guys" and "censoring free speech".
This point has gotten raised before by other people, but I just want to remind everyone that a nontrivial part of Cloudflare's business model is dedicated to censoring network requests. Cloudflare draws a line between "speech" and "abuse" every single time that it intercepts and blocks a DoS attack and every single time that it classifies IP ranges as dangerous or safe. It doesn't demand a court order in order to block clients from accessing a website if it thinks that those clients are contributing to a targeted attack. It doesn't rely on the government to tell it what is and isn't malicious web traffic.
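To make that concrete, here is a minimal, purely hypothetical sketch of the kind of automated blocking decision any CDN or DDoS-mitigation layer makes constantly. This is not Cloudflare's actual logic; the field names, thresholds, and reputation scale are invented for the sake of argument. The point is simply that every such drop is a judgment, encoded in software, about which traffic deserves to reach its audience.

```python
# Purely illustrative sketch -- NOT Cloudflare's real system. The signal names,
# thresholds, and reputation scale below are invented for the sake of argument.

from dataclasses import dataclass

@dataclass
class RequestInfo:
    ip: str
    requests_last_minute: int
    ip_reputation: float  # hypothetical score: 0.0 = known bad ... 1.0 = known good

def should_block(req: RequestInfo,
                 rate_limit: int = 600,
                 min_reputation: float = 0.2) -> bool:
    """Drop traffic that looks abusive: excessive request rate or a low-reputation IP."""
    if req.requests_last_minute > rate_limit:
        return True
    if req.ip_reputation < min_reputation:
        return True
    return False

# A client hammering an endpoint from a low-reputation address gets dropped
# automatically -- no court order, no appeal, just a policy encoded in software.
print(should_block(RequestInfo(ip="203.0.113.7",
                               requests_last_minute=5000,
                               ip_reputation=0.05)))  # True
```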
A big part of the argument against Kiwi Farms was that the site wasn't just hateful or bigoted; it was actively abusing infrastructure and targeting individuals in the real world. I think compressing all of this down into a single "bad guys" category oversimplifies the issue. I personally feel that Kiwi Farms has more in common with a malware site or a DDoS-for-hire site than it does with a political site.
Of course, Cloudflare also provides services for multiple DDoS-for-hire sites, and I have the same criticisms there. Is Cloudflare a free-speech absolutist about this or not? Because if Cloudflare is arguing it has a moral duty to make sure that DDoS-for-hire sites stay online, then it's not immediately clear to me how it justifies its own business model. I honestly feel like there's a real lack of critical thought and deliberation from Cloudflare recently about speech. I don't think the Kiwi Farms decision was a particularly complicated one: Cloudflare is a service that is in part dedicated to making it harder to knock people offline. That they don't seem to see any parallels between their services and their moderation decisions, and that they don't seem to realize that they are in fact in the business of censorship -- that is concerning to me.
It feeds into the point above about why Cloudflare feels so comfortable being in charge of this much Internet traffic given its fears about censorship. Does Cloudflare not realize that it has a tremendous amount of power and is in fact directly shaping the kinds of expression and speech and services that people can build online regardless of what its moderation decisions are? Any company that's reaching the stage where they're scared about deplatforming a doxing site should be scared about a lot more of their powers than just moderation.
> I feel frustrated by this debate; do people really believe that having every speech question litigated in court is good for freedom of expression online?
Yes, of course it would be. The end result of litigating free speech in the court system is that the court system would rule in favor of the speech, almost every single time.
The courts have extremely strong protections for speech. They are way way stronger than what private companies do.
Just adding the word "government" doesn't make something more scary. In this case, adding "government" to the enforcement mechanism for speech would mean that basically nothing gets banned.
> is that the court system would rule in favor of the speech, almost every single time
This is kind of a run-around. Cloudflare's blogpost argues that it wants to get this content offline, it just wants courts to be in charge of that process. It acted because it believed the courts were too slow.
If your argument is that moving speech to the government level is good specifically because the government won't censor it, then don't pretend that we're arguing about where moderation should occur. The actual argument in that case is whether you want this moderation to happen at all.
I take Cloudflare at their word that they actually wanted a court to tell them to deplatform this content. And because of that I take them at their word that they believe the government could feasibly pass laws that would censor the content people are asking them to deplatform.
> If your argument is that moving speech to the government level is good specifically because the government won't censor it, then don't pretend that we're arguing about where moderation should occur.
You said the following: "do people really believe that having every speech question litigated in court is good for freedom of expression online"
That was your statement. And I responded to your topic and statement that you brought up.
Yes, it is descriptively true that if courts are the ones handling free speech issues, that would result in more protection for free speech.
If you disagree with that, and think there should be fewer free speech protections online, feel free to argue that.
But the original statement you made was about whether speech would be more or less protected than if random private companies on the internet were the ones in charge of what speech people are or are not allowed to make on the internet.
> they actually wanted a court to tell them
You are confusing a few issues. There is an outcome, and a process.
A process can still be important to go through, regardless of the outcome.
The whole point of the court system is to have checks and balances.
That is what people want, when they advocate for the court system to look at an issue. It is about the process. It is about saying "if something is arguably so dangerous, that you think it should go down, then it is important to have checks and balances, and that is why we put the court in charge of it".
Because if we don't have a process, or the process is bad, then this can affect other speech situations in the future.
For all we know, Cloudflare is now going to face significantly increased pressure to take down human rights websites because of this, and if this takedown had instead gone through the government, that pressure wouldn't exist.
This is why process can be important, regardless of the immediate outcome of whatever issue is in the news that day.
There's a context here; comments don't exist in isolation.
> That is what people want, when they advocate for the court system to look at an issue. It is about the process. It is about saying "if something is arguably so dangerous, that you think it should go down, then it is important to have checks and balances, and that is why we put the court in charge of it".
I've no doubt people believe this, but I don't think this is an accurate summation of Cloudflare's blogpost. Cloudflare is pretty clear that they wanted a court not just to tell them what to do, but to tell them to take the content down. They eventually moved on their own not because of a lack of guidance, but because they believed the court process was insufficient and slow. I don't see any reading of their post that they were hoping a court would tell them to leave the content up.
And certainly Icathan is not advocating that the courts should leave the speech up. In their words:
> I truly hope that those unsatisfied with this outcome (which I suspect will be literally everybody) can take this as an opportunity to go help pressure their respective governments to figure out what the hell should be done, systematically, about hate speech on the internet. It's only 25 years overdue at this point.
> Cloudflare is choosing between "platforming the bad guys" and "censoring free speech"
Talking about free speech and censorship in cases like this is what people who are OK with the harassment do. Don't. They should just have a normal TOS like everyone else and apply it. Freedom of speech should not be invoked in this case.
i think this is the sort of thing that exposes a general failure of capitalism.
the underlying assumption of capitalism is that competition works, and one company doesn't have the power to do something like censor a website, because competition will solve that. instances like this prove that to be wrong. cloudflare (and google, amazon, and most other big companies) get put in this position because regulators insist on pretending our economic system is pure capitalism. but in most cases, the big players are much more of a monopoly than anybody would like to think, and the forces of competition are a farce.
FWIW i think kiwifarms should be censored. but it sucks for cloudflare that they have to be the one to make that decision.
Of course. That's what courts are in the USA -- they determine which speech is protected as free speech, and which is not (eg defamation, fraud, perjury, copyright infringement, threats, etc).
But as CF has noted, the wheels of justice are too slow for the speed of internet, so they had to act proactively.
Copyright holders already noted the problem and got the DMCA passed to create an internet-speed version of copyright enforcement. The remaining laws governing speech have a lot of catching up to do.
> But as CF has noted, the wheels of justice are too slow for the speed of internet, so they had to act proactively.
Sometimes these things are slow for a reason. They have no idea whether those threats were legitimate or posted by the very group of people that were making a huge fuss about KF the past few weeks. Anyone can post a threat anonymously and then report it themselves.
The wheels of justice are slow because they don't just take everything at face value, which CloudFlare just did.
It's not about speed. It's that in courts of law there's something called "due process" -- important safeguards designed to prevent injustice, e.g. the right of the accused to defend himself or herself.
The court of public opinion runs on emotion, with a loud enough megaphone accusers don't need to prove anything, and publicly traded companies are slaves to bad PR.
And yet a fast system was invented for copyright protection online. Was the DMCA wrong?
And could the legal system handle it if it was responsible for acting on every case of online copyright infringement through formal due process? What about every case of defamation? Of every threat?
The internet has set information free, and allowed every person to contribute to the information superhighway. But that means that torts and crimes that used to be rare and manageable by the courts are now anonymous and decentralized and democratized in such a way that an endless number of people can do them. Leaving it to the courts is asking us to empty a river with a thimble. It was unacceptable for copyright, hence the DMCA.
Do only copyright holders deserve that kind of protection?
>I truly hope that those unsatisfied with this outcome (which I suspect will be literally everybody) can take this as an opportunity to go help pressure their respective governments to figure out what the hell should be done, systematically, about hate speech on the internet. It's only 25 years overdue at this point.
I can unfortunately see governments doing a China-style ID requirement for internet sites....
> figure out what the hell should be done, systematically, about hate speech on the internet.
I honestly wish there were some organization focused on the causes of hate speech rather than censoring hate speech.
What caused this? Why are Kiwifarm users so hateful? One does not just hate out of the blue, especially not to the degree of the actions they've taken (judging from their Wikipedia article).
Then there's the other end of this: To walk a thorny world, don't pave the world, wear sandals.
How can anyone be harassed online to the point of ending their life? Were there not enough settings, blocklists, and such to keep the harassment away? Were they unable to access the services that would have helped them better handle the harassment that did get through?
> What caused this? Why are Kiwifarm users so hateful? One does not just hate out of the blue, especially not to the degree of the actions they've taken (judging from their Wikipedia article).
A lack of moderation to remove hate. Hate breeds hate, and drives good people away. In a similar way that bullshit breeds more bullshit unless removed.
Let me try an analogy for technical people.
Say you're a person deeply knowledgeable about computers and technology, and you're posting in an audio related forum.
Somebody posts a glowing review of an expensive device that claims that shaving the edge of a CD and painting the border with a marker will give you a bigger soundstage, more clarity and make the audio sound crisper: https://www.youtube.com/watch?v=f-QxLAxwxkM
You can try to explain why it's bullshit, but it's hard. You have to go into details about how a CD actually works, why this BS about reflections has no effect on a CD mechanism, how error correction works... you'll have to write several pages of deeply technical information that you then have to boil down to something understandable to mere humans. It's a tough job. Not only do you need technical understanding, but you also need to be good with words, and good at explaining complex concepts simply. And you have to have the time and dedication to spend an hour or two writing about it for free. That's a lot of unusual characteristics for a single person to have.
Then somebody goes "shut up nerd, it sounds better!" in response. And they proceed to post more reviews of volume knobs that somehow improve the sound because the wood is special, gold plated optical cables, and other such junk.
It takes a whole lot more effort to provide good information on a complex matter, and virtually none to spout bullshit. So eventually the smart people will get fed up and leave. Especially because they can find places where they're appreciated -- they'll find a home on a more specialized forum where their expertise is actually valued. Meanwhile the original forum will get even more BS.
Same happens with social topics. It's easy to spread conspiracy theories and hate. It's hard to explain complex social issues. Without moderation the first will trivially overwhelm the second.
Well, so far you've almost proven my point. You've responded with a trivial insult and no actual counter-argument, which was far easier to write than my comment and contributed nothing to the discussion.
> they were - or at least Moon - always careful about not crossing the legal line
No clue. We likely won't know until law enforcement's investigations are over. In the meantime, everyone is free to come to their own threshold. That's essentially what's going on here. If it turns out Cloudflare erred, they'd have to show the evidence that pushed them to take such an extraordinary step to regain trust.