> X Corp. argued that AB 587 violates the First Amendment by compelling "companies like X Corp. to engage in speech against their will" and "impermissibly" interfering "with the constitutionally protected editorial judgments of companies."
Ok, genuine question there: are companies considered the same as people when it comes to the US Constitution? Does a company have free speech and the right to bear arms?
Courts have said: Companies are just groups of people. So a person, employed by Twitter, has the same free speech rights as anyone else. If you force a company to take certain actions, a person in the company must perform that action... and forcing that person to take certain actions may be a violation of that person's rights.
So companies being required to file for taxes and put nutrition facts on their food products violates the rights of certain people within those companies?
In particular, the Zauderer test allows the state to compel commercial speech if doing so is "reasonably related to the State's interest in preventing deception of consumers"[1]. It's obviously up for debate, but I think one could make a strong argument that Twitter's content policies pass this test.
(More generally, however: there's lots of compelled commercial speech that doesn't fit into this flowchart. Taxes, commercial permits, leases, etc. I think one could make a strong argument that this law is strictly logistical in nature and represents no more of a 1A risk than Twitter's commercial leases do.)
It's a voluntary arrangement that they can leave. Certain activities come with responsibilities and restrictions, in the interest of protecting everyone else's rights.
No, our constitutional rights are not predicated on our actions or speech having no impact on society. On the contrary, our rights exist so that we may impact society as we see fit.
This is such a dangerous line of thought I almost don’t believe it.
User's original comment mentioned impact on society.
This is word for word the talking points of the recent NM governor's gun declaration.
User also modified his original comment from "impact on society" to nothing, then to "in the interest of others' rights". Also all word for word with the current blitz on constitutional rights.
I guess, to anyone who hasn't had experience moderating a popular social media platform or talked to anyone who has. Was that really so shocking?
> it was in fact unconstitutional
Nice of the government to step up to the plate and give those of us who've been on the corporate side of this some guidance, for once. Most of what companies get from Congress and the Court is radio silence on the topic (ironically, I suspect, so the government isn't credibly accused of violating a corporation's First Amendment rights by telling them how they can and cannot moderate). So it's nice for the courts to step up and tell companies that the thing the executive said they had to do, no, they don't have to do; that'll be helpful moving forward.
Yes, I saw. It's nice for the Courts to finally give some guidance to corporations on when they should and should not follow the signal from the Executive; without it, they're pretty much on their own to guess at the Constitutionality of requests or demands.
> the most recent decision that it was in fact unconstitutional.
This is not really what the decision states. The government can request all it wants but it cannot partake in "threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech".
I'm personally OK with the government requesting things to be moderated; I'm not OK with the aforementioned methods if the request isn't backed by law.
Let me warm up some popcorn for when Twitter tries to claim they cannot lose their business license because the State has no power to compel them to file their annual report, since doing so violates their employees' 1A rights.
So how do you jail a corporation when it has committed a crime?
If you can't, then it is not a person.
If that discrepancy doesn't feel unjust to you, you should do some soul searching, and maybe look at other places where corporations profit from their "personhood" without ever having to experience the negatives of actually being a person.
I didn't come up with the totally bonkers idea of declaring some organizational entity a person, so don't expect me to defend the logical conclusions stemming from it.
X's will, or Musk's? If X has a "say" on a "speech platform", that's a major power imbalance.
Not that X's speech is this or that, but that it shouldn't exist on its own. Musk or anyone may speak on behalf of X, but if there is no "on behalf of", there should be no speech there.
Speech as in, to put forth opinion, ideology, values or anything beyond simply being silent and letting everyone else (users, which includes those who may speak on behalf of) speak.
All X has to do is make controversial content decisions the same way the other social media companies do: at the behest of our intelligence agencies and opaque government-funded "partners"!
I can't tell if you're kidding in thinking that's what has been happening. The Twitter Files turned out to be a huge exaggeration; the government wasn't really controlling Twitter.
> The twitter files turned out to be a kind of huge exaggeration.
"Twitter Files" was hyped as ushering in a new era of radical transparency on Twitter moderation. If you take that at face value, it is ironic that Twitter is now refusing to be transparent about reporting its moderation (or lack thereof).
If you do not take it at face value, then it appears to have been a score-settling exercise motivated by animus against the previous management.
It was enough control for a judge to outright bar a number of government agencies and people from communicating with any social media company on moderation matters, and then another judge to uphold that bar.
Technically it was a panel of 3 5th circuit judges. This is the same circuit which believes that for government to, in any way, "induce" a social media platform into negatively affecting the reach of user content, is likely a violation of the first amendment. This would presumably include merely calling out a post and essentially saying "hey I think this violates your policies, could you take a look?". Simultaneously, they believe it's a-ok for the government of Texas to expressly dictate social media moderation policies via legislation. When conservative-aligned plaintiffs bring lawsuits in the 5th circuit, they are able to win at each level of the federal court system without ever having to convince a single person that isn't politically aligned with them.
The requirements for an injunction are much lower than an actual ruling.
The party must just show that they are likely to win the case and that they would suffer irreparable harm without temporary relief.
To use that as proof that the defendant will win is ridiculous.
Injunctions are not handed out willy-nilly, and the actual wording in the injunction should give you pause:
"The officials have engaged in a broad pressure campaign designed to coerce social-media companies into suppressing speakers, viewpoints, and content disfavored by the government"
Nobody said anything about "proof that the defendant will win". I said that several judges have found evidence of unconstitutional pressure being applied. Please do not misrepresent a plain statement of fact.
Incorrect. The Fifth Circuit upheld the injunction. They found, specifically, that the officials from the FBI, White House, and CDC likely violated the First Amendment.
> As explained in Part IV above, the district court erred in finding that the NIAID Officials, CISA Officials, and State Department Officials likely violated Plaintiffs' First Amendment rights. So, we exclude those parties from the injunction. Accordingly, the term "Defendants" as used in this modified provision is defined to mean only the following entities and officials included in the original injunction:
[Followed by a page and a quarter listing the people and agencies to whom the injunction still applies]
In no way is it honest to describe this as "overturned".
This is myopically true while ignoring the large amount of influence coming from the business world itself, which inevitably just creates a partisan flamewar. Tying all the badness up in a tidy bag labeled "government" is just another method of control.
No matter where you stand, I think it's good that this is happening. The CA law might be overreaching whilst Twitter might be too lax. Hopefully a head-to-head brings clarity on an important topic.
With the CA law, EU regulation, app store policies and the like, a pretty powerful net is cast around the topic of speech. Specifically about speech that is technically legal yet considered unwelcome.
It's tempting to let judgement be clouded by hatred of Musk, but it's a topic worth thinking about more deeply, beyond just X.
It's also revealing how this legislation only targets Big Tech. From a pragmatic point of view, this makes total sense. But it also shows that this legislation isn't based on first principles. It's a panicky patch on an open wound, not the definitive say on free speech.
They're required to spell out their moderation policy and provide details on how it's enforced, but the law does not appear to put constraints on the policy. So it would seem to be consistent with the law to have a policy like
> We moderate depending on our values and feelings as we experience them at the time, and we do not strive for consistency in these judgments; they may change depending on how well our moderators have slept the night before or how the air pressure affects their sinusitis.
That would seem to comply with the law while enabling the social network to maintain maximum flexibility.
I don't think that would comply with the reporting requirements. And it sounds like their real concern is that it'd provide a way to use those reporting requirements, and the discretion as to what constitutes good-faith compliance, to pressure the company into taking actions.
There also appears to be a real free speech issue here that is at least somewhat related to litigation that has occurred regarding editorial freedom of the press.
California just requires transparency about the policies. They just need to put in writing that free speech absolutist Musk is responsible for moderation decisions like censoring the account that tracked his plane with public data.
Of course not - an HN post is hardly long enough to contain the real policy, which would undoubtedly be many pages long. hirundo is posting a hypothetical, exaggerated example.
But Twitter could have a policy that gave them great leeway to make arbitrary decisions, by stating things like:
* We use automated systems, which ensure a timely response and let us stay on top of many millions of tweets per day. However, these automated systems occasionally err in both taking down reasonable content and leaving up unreasonable content. We are constantly improving these systems, but with x00 million tweets per day some errors are inevitable.
* Bright line rules are not always possible in moderation. For example, we would generally not censor images of Michelangelo's David, the Venus de Milo, or napalm girl, despite a general policy of not showing genitals or female-presenting nipples.
Similarly, a post might be parody, sarcasm, exaggeration for comic effect, or otherwise acceptable. For example, we might allow NWA's 1988 hit protest song "Fuck tha Police", or a tweet by Donald Trump threatening nuclear war with North Korea, despite a general prohibition on calls to violence.
* Decisions are made by our moderation teams on a case-by-case basis, following the vague, subjective guidelines found in Appendix A.
* We hire our moderation workforce from around the globe valuing diversity and multiple perspectives and timezones blah blah therefore details that may be obvious to some audiences may sometimes be overlooked by our moderation teams
* The fact a tweet has been reported by a large number of people does not necessarily mean it will be blocked, as some more visible accounts may attract more flags proportional to their larger audience. Twitter also suffers from organised 'downvote rings' which systematically flag posts by their political opponents, hoping to reduce the reach of those posts. We may ignore such reports, when we detect them.
* Content moderation is interlinked with spam, astroturfing and bot accounts. For example we may block posts about buying cheap viagra, despite it not being extreme content or hate speech.
* Our automated systems may identify accounts as suspected bot or spam accounts based on heuristics that may not be obviously related to our moderation policies. Even a newly opened account that has never sent a single tweet or followed a single person may be blocked.
* The ever-changing spam environment, the ever-changing language used by Twitter users, and the cat-and-mouse game with people who'd like to not be moderated mean the guidance given to our moderators changes on a daily basis. This document is a snapshot, and our actions in the past and in the future may be inconsistent with it.
* We are committed to helping users stay safe and control their twitter experience, through options like Unfollow, Filter Notifications, Show Less Often, Mute, Block and Report.
Expand with a few more pages of waffle and voilà, you have a policy that doesn't actually pin you down to taking any particular action in any particular situation.
Since the moderation practices that must be disclosed also cover things like hate speech, harassment, and other generally illegal speech, the platform can't simply answer in that fashion: the court will issue a default judgment against a "social media company that has not made a reasonable, good faith attempt to comply", per AB 587.
Also, the "feelings" of the moderators imply human moderators exist and are the norm. This might not be true and you would be submitting false information for an AI-heavy moderation pipeline.
Hate speech and most speech that falls under the category presently described by social media users in 2023 as "harassment" is decidedly not illegal, and is in fact protected expression.
Threats are not protected, but the vaguely defined concept of "hate speech" that is not already-illegal direct threats of violence isn't really a thing.
This seems to be a common misconception. If it were violent threats, which are illegal, people would just call it that. When people say "hate speech" they are attempting to promote censorship of otherwise-legal expression that most of society nevertheless finds repugnant.
You are mixing many different viewpoints here and it is diluting whatever point you were making.
Harassment has a legal definition and so a law referring to it is not "as described by social media users"...
You then decide by fiat that the entire thing is pointless because of how you feel, which ignores the very real problems on social media of exactly the kinds of messaging you claim this isn't about.
I think there is a pretty good case to be made here that AB 587 is unconstitutional and violates the 1A. Other states have made similar laws, and those are being challenged up to the Supreme Court. I bet this one will too, unless California repeals it first. https://www.concordlawschool.edu/blog/news/california-social...
Not a Twitter user or a Musk fan, so not defending them. But it's hard for me to see how this law wouldn't chill speech...
I've mentioned this before, but the speech of Twitter users isn't Twitter's speech. So it seems laughable for them to claim this affects their speech whatsoever. Add to that the fact that Twitter (and all social media) are absolutely not interested in making their users' speech their own and being responsible for it.
There's not even any compelled speech argument that Twitter is being forced to speak about something it does not want to. Twitter is a corporation, not a person. It has no right to exist and can have its business license revoked.
Now, if Twitter does want to make each Twitter user's speech its own speech, then maybe Twitter might have grounds to make a 1A stand. But as it is, Twitter and all social media absolutely do not want to be held responsible for what they publish, so, frankly, they can pound sand.
It would be different if Twitter were considered similar to a newspaper where Twitter acted as editor and could be held accountable for what they publish. But they will fight that tooth and nail.
> So it seems laughable for them to claim this affects their speech whatsoever.
AFAICS the proposal doesn't require them to moderate Twitter users in any way; it requires Twitter to state publicly and clearly what their moderation policies are. But arguably stating its moderation policies is "compelled speech".
Thing is, there's plenty of compelled speech going on already, including food labelling laws. There's precedent for it (and US law, being a common-law system, does follow precedent, subject to appeals to successively senior courts).
The tricky thing here is whether Twitter is required to have explicit, clear moderation policies that actually determine the decisions they are likely to take, or whether their (constitutionally protected!) right to moderate includes the right to moderate absolutely arbitrarily; if compelled to publish their policies, could they just truthfully state that each decision might be solely determined by the mood of whoever decides that day?
The sine qua non of corporations is compelled speech. A corporation can't exist without articles of incorporation / corporate charter and it needs written policies for basically everything it does to keep the corporate veil intact.
Yeah, exactly. There's no 1A territory here. The idea that a commercial business cannot be regulated and compelled by a State to produce a mere reporting of its relevant business activities is completely bonkers.
It would be even more bonkers if government could regulate without any restrictions.
Can California demand that a restaurant provide a daily report on how often each employee washes their hands, and fine them $15k per day if they don't, or if they provide incorrect information?
Per your simplistic argument, it would be bonkers to say California couldn't create such a law. It's a mere reporting of relevant business activity.
To me it would be bonkers if California could create such onerous regulation.
In law, analogies only get you so far. I'm pretty sure things like food labeling were challenged in court and found not to violate the 1st Amendment, but the onerous requirements California tries to impose here are not close enough to use those cases as some kind of precedent.
There are always consequences to regulations, and practical restrictions seen at the ballot box. If the reporting is justifiable and required of all restaurants, then... so be it? If the People are unconvinced that it's necessary, or they don't like it and think the government is out of touch, we throw the bums out. The very real risk of losing power is what keeps them from passing the sorts of "ridiculous" regulations you are proposing.
I don't see why California should not be able to pass such a law. It'd be a really dumb law, but the constitution does not try to forbid bad, overly onerous laws as long as they aren't specifically targeted at one person.
I haven't looked at this in a bit, but I thought that companies are exempt from their users' speech only if they don't make editorial decisions (and Twitter does, like reddit or HN or Slashdot). Am I mistaken?
EDIT: I replied to this before refreshing and seeing sibling replies. So I guess all I have to add is another link.
On the contrary, Section 230 is what allows platforms to make some editorial decisions without thereby assuming liability for all user-generated content.
That is something oft-repeated in conservative circles without a shred of basis in law. In fact, Section 230(c)(2) of the CDA does the exact opposite. It was written to explicitly _grant_ immunity when companies moderate in good faith.
The Texas and Florida laws are not equivalent to the CA law:
* Texas HB 20 prohibits the censorship of content based on viewpoint
* Florida SB 7072 ... prohibits platforms from removing certain types of accounts
So both laws require companies to engage in a type of speech. The CA law does no such thing: it just requires companies to disclose their policy and how it was applied.
If these "laws" are justified as protecting first amendment rights, then why are the laws necessary anyway? Why not just go after the offender for violating the first amendment?
Maybe I'm misunderstanding what you're saying, but only the government can violate the 1st amendment, not companies or individuals.
Note that that doesn't mean a company cannot violate a law with a basis in free speech. The 1st amendment isn't the only law when it comes to this - this is where state laws step in.
Uh... because that's literally what the First Amendment says?
> Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
The 14th amendment later extended this to apply to state governments as well - but still only governments.
But a social platform can very much violate someone's first amendment rights. Which is what all of this hubbub is about. So we're saying that because we've specifically said gov't can't, we now have to go back and make a law that says "neither can corporations, other citizens, or any other party", in some kind of legal wording?
There might be a misunderstanding here. The Bill of Rights (the first ten amendments) specifically applies to limiting the powers of the federal government. It doesn't grant rights to individuals; it takes away powers from Congress. It doesn't protect you against other citizens, but against government overreach.
A company can't violate someone's free speech (or other rights) because it's not the government; it literally isn't under the purview of the amendments. If it does something illegal, that is because that act is made illegal under some other code, not because of the Bill of Rights. It can be prosecuted for criminal or civil violations, but it wouldn't be a constitutional issue unless the government is the one doing it.
If a social media site wants to silence you, too bad. If the government wants to silence you, that's a big deal.
(Edit: However, your argument isn't unheard of. Here's one opinion from the American Bar Association arguing the same thing -- that the 1A ought to apply to privately run public forums too: https://www.americanbar.org/groups/crsj/publications/human_r...)
We'd have to make that law if we wanted to put that constraint on corporate speech, yes. The fact government can't kick you out of the public square for questioning government doesn't mean I, a homeowner, can't kick you out of my living room for questioning my taste in home decoration (or that Altria, a cigarette company, can't remove you from their property for yelling to all and sundry in their lobby that cigarettes cause cancer, even if it's true).
It's not clear such a law is wise. ¿Cui bono? from tying private forums' hands against kicking Nazis, white supremacists, the Nation of Islam, Patriot Front, the Proud Boys, etc. off their websites?
The fundamental problem is the US's negative framing of rights. A negative framing that is only in terms of the de jure government, to avoid logical contradiction. This leads to the perverse winner-take-all argument that corporate censorship is itself protected corporate speech. A positive framing of the right to free speech would allow for a logically-imperfect yet equitable balance between the interests of individuals' speech and corporate speech.
> This holding was possible because California's constitution contains an affirmative right of free speech which has been liberally construed by the Supreme Court of California, while the federal constitution's First Amendment contains only a negative command to Congress to not abridge the freedom of speech. This distinction was significant because the U.S. Supreme Court had already held that under the federal First Amendment, there was no implied right of free speech within a private shopping center.[4] The Pruneyard case, therefore, raised the question of whether an implied right of free speech could arise under a state constitution without conflicting with the federal Constitution. In answering yes to that question, the Court rejected the shopping center's argument that California's broader free speech right amounted to a "taking" of the shopping center under federal constitutional law.
I generally believe that companies that enter spaces that previously afforded "users" first amendment protection should have similar laws to protect the free speech of users in said privately owned "spaces".
> I generally believe that companies that enter spaces that previously afforded "users" first amendment protection should have similar laws to protect the free speech of users in said privately owned "spaces".
What spaces are those? From where I sit, we went from BBSes (where the admin controlled the signal) to self-hosting (where the user controlled the signal, unless it was so onerous that an ISP cut service) to cloud-hosting (where a company controls the signal).
The ideal free speech world where people just spoke their minds online is a fantasy borne more out of lack of interest to regulate than power to regulate. Remember, the original network was a "network of peers" where most users were not pseudonymous; their academic or military sponsors could cut their access.
The network has never had anything like absolute free speech. Communication over the network has always been an affair involving at least two parties who must agree on the transmission of signal.
In the town square, the government can't remove you but a fellow citizen can just stand there and scream "NOOOOPE!" to everything you say, over and over again. And the local grocer is allowed to turn you away as a result of those things you were screaming in the town square earlier about their daughter.
Snail mail is a great example, but the burden of proof is on those who think the Internet should work the same to justify why a model baked into federal Constitutional requirements should be used to justify de facto nationalization of private infrastructure. Now, if somebody wanted to put the tax dollars up for the federal government to build out "the people's Internet," regulated with something akin to first Amendment protections for transiting the network and rights to privacy, I certainly wouldn't be against it.
In general, the corporate liberty to choose who to signal boost and what signal to drop isn't even a speech question; it's a freedom-of-the-press or freedom-of-association question. The fact someone said something to me (or to a company) compels neither of us to repeat it like a parrot. Similarly, the things I post here are not my own and dang can decide to drop this whole post (or the whole thread) if he wants; the fact I run a Mastodon node doesn't imply I have to peer to anyone, de-peer from anyone, or allow anyone's scribblings to traverse my node, &c.
The Internet is a network; communication on a network implies (at least) bipartite agreement. Either side of the conversation has veto over the signal.
Your comment is just describing the winner-take-all negative-framing environment in a more colorful manner, while willfully ignoring where the abstraction falls apart. For example, leaning on the idea that communication requires a "bipartite" agreement is a bit rich when the vast majority of web communication involves at least three parties.
FWIW you can apply your argument to an ISP to make the argument that it's right for them to arbitrarily censor as well.
That's just two bipartite agreements end-to-end. I could technically have said an n-ary agreement, with the freedom to select multiple paths, for more accuracy; I figured "bipartite" encompassed all the relevant nuance.
> FWIW you can apply your argument to an ISP to make the argument that it's right for them to arbitrarily censor as well.
Yes and. ISPs generally do have that liberty and do employ it when they deem it necessary (see Hurricane Electric cutting ties with IncogNET for doing business with KiwiFarms, various ISPs cutting user accounts for piracy, AUP, or TOS violations, etc.). The backstop against that behavior is generally that they're leaving money on the table if they don't provide as many people as much service as their system capacity permits. But in general, legally or morally, nobody is entitled to more access than showing up at the public library to use a community kiosk (and if you do that and look at porn, they can ban you too).
"Bipartite" most certainly does not encompass all the nuance, and actually functions to confuse. In a scenario of two parties, one wanting to communicate and the other one wanting to not listen, it's quite clear that the second party is not obliged to listen.
Whereas these three (or more) party situations are exactly what we're discussing here - where two endpoint parties are trying to communicate yet there is a third party in the middle playing the position of censor. This is the entire crux of the matter.
When you say that ISPs do have the liberty to censor, are you speaking approvingly or purely descriptively? Because what I see is a description of our traditional values that had previously been serving to foster an open society, but have now fallen apart as companies scale to own much larger portions of markets, pry into what should be the business of their customers, and collude with each other to create unavoidable de facto governmentesque power. If you really don't see the problem with an ISP censoring, then how about the electric company?
Also the idea of companies being incentivized to not unjustly deny service because they'd be leaving business on the table is blatantly fallacious.
> where two endpoint parties are trying to communicate yet there is a third party in the middle playing the position of censor. This is the entire crux of the matter.
Hey if they want to keep communicating they can send letters. There are plenty of communication solutions that don't involve conscripting a private third party to transit the data against their will.
> what I see is a description of our traditional values that had been previously serving to foster an open society, but have now fallen apart with companies scaling to own much larger portions of markets, prying into what should be the business of their customers, and colluding with each other to create unavoidable de facto governmentesque power.
What I see is the logical conclusion of that open society. If people didn't know we were going here they weren't paying attention. Additionally, the unfettered open society brings us KiwiFarms and 4chan; I think the burden of proof is on those who support this model to explain why companies should be forced to facilitate those actors on the Internet.
You can still set up your own ISP if you don't like how the others are playing. You can't do that if the government starts putting constraints on what an ISP is.
> If you really don't see the problem with an ISP censoring, then how about the electric company?
Entirely different scenario. Electric companies have government-sanctioned local distribution monopolies; those monopolies come with additional constraints on their corporate behavior.
Hypothetically, one could argue that in the US at least, this fact constrains Verizon and Comcast's hands. But it doesn't constrain Twitter, or Hurricane Electric, or Reddit, or Google, or Facebook, etc., etc., etc. If you don't like those companies' behavior you can go to another or set up your own. I didn't like Twitter; I run a Mastodon node.
> Hey if they want to keep communicating they can send letters ... You can still set up your own ISP If you don't like how the others are playing
If you're trying to make jokes, then you need to work on your delivery.
> What I see is the logical conclusion of that open society. If people didn't know we were going here they weren't paying attention
I generally hate references to the paradox of tolerance, but you're essentially advocating intolerance that takes advantage of tolerance. I'd say that when the logical conclusions of rules that currently lead to an open society are abused to create an ever-more closed society, those rules need to be amended - it's foolish to assume that the outcome is moral by construction.
Even when pushed, you seem to pretty explicitly support corporate authoritarianism, so I doubt I'm going to change your mind. But as I said in another comment - it's quite comfortable to cheerlead for authoritarianism when it aligns with your preferences, but the winds can change at any time.
> it's foolish to assume that the outcome is moral by construction
I believe it's also foolish to assume the outcome of forcing people to transit signal harmful to them is moral.
The nice thing about the status quo is we can address this problem with our own choices: if one doesn't like HE's behavior, for instance, one can de-peer from HE. The 'net can interpret censorship as damage and route around it, as it were.
I'm not so much in favor of authoritarianism as individual sovereignty, and sometimes the "individual" is a corporation or large association. But so long as it can be routed around, it's fine (and if it cannot be routed around, we have antitrust for that). But consider Mastodon, where individual users are continuously deciding who to follow and who to block, and node operators are continuously deciding similar questions at the node-to-node peer level. It's beautiful chaos, not authoritarianism. What business would the State of California have dictating to every Mastodon admin the rules of their node (or, for that matter: ¿cui bono? for the state in even tracking those policies)? Maybe it's none of the state's damn business who I let on my node and who I peer / de-peer from.
> forcing people to transit signal harmful to them
This is like the third time you've swapped in a sympathetically-small example to make it sound as if any regulation of large companies implies that grassroots individuals would be forced to unreasonably do things against their will. But we're not discussing a vibrant competitive landscape of Mastodon instances, local mom-and-pop ISPs, websites run by individuals, or "people". Rather we're talking about the likes of Twitter, Google, and Comcast.
> I'm not so much in favor of authoritarianism as individual sovereignty, and sometimes the "individual" is a corporation or large association
I wholeheartedly agree with individual sovereignty, and that's precisely where my comment is coming from. The second part is fallacious induction - scale is the entire crux of the matter, especially with the context of Metcalfe's law. The larger these companies get, and the more they cooperate with each other, the more inescapable interacting with them becomes. This destroys the ideal of individual sovereignty for actual individuals.
Hand waving about "antitrust" is an excuse that doesn't redeem the narrative. If you were actually concerned about coercive power created through anticompetitive actions, you'd lead with the utter lack of antitrust enforcement and arrive at a very different conclusion. For example, reforming the bundling of identity, data hosting, and software that is pervasive across the industry would go a long way toward reforming the power dynamics.
On the topic of actually running more decentralized nodes, I'm all for it. If you want to discuss things non-normatively as an assumption of the worst case fusing of government and corporate power, I'm right there with you. But in the context of the big tech power consolidation, what are currently small-time self-help options for the technical few to hide from the overall trend aren't particularly relevant. For example even though I've always run my own mail server and recommend people get their own domains, I can still condemn the negative societal effects of Gmail having unilateral control over many individuals' online identities. (In fact doing so gets even more important because Google blazing the trail of unilateral corporate authoritarianism has caused the domain registries to want to get in on the shameless action)
I think we can agree to disagree at this point. You believe that corporate power will lead to authoritarianism and the only way to stop that is a stomp on that power at the cost of liberties (because we can't just regulate Google or Twitter; California's registration policy nets up every Mastodon node and every home gardening forum and every still existent BBS). I think the liberties are more important and people should have authority to shape the network, and government intervention hampers that freedom. You see authoritarianism as coming from corporations; I see it as coming from the government.
I'm also extremely leery of anything that makes it more difficult for any operator in this network to put up a sign that says "This is not a Nazi bar," even if it at the same time makes it easier for somebody to put up a sign that says "This is not a gay bar." Because I'm not worried about the authoritarian threat posed by unfettered communication in the LGBTQIA community, but I sure am worried about it from the white supremacist community.
> You see authoritarianism as coming from corporations; I see it as coming from the government.
No, I see authoritarianism as coming from both government and corporations. Government is logically equivalent to a monopolistic corporation where your only way to end the contract is by physically moving off from the land it has an ownership interest in. Individual freedom can only exist when both government and corporate power are kept in check. Government power through democracy, separation of powers, and bureaucracy. Corporate power through competition, exit, and regulation. If either one is allowed to run amok to the culmination of its own desires, the end result is centralized control and diminished individual freedom. (And since power coalesces regardless of the type, these imperatives generally feed each other)
> You believe that the only way to stop that is a stomp on that power at the cost of liberties
No - I do not believe it is correct, and in fact it is quite perverse and grotesque, to characterize corporate control as "liberty".
> California's registration policy nets up every Mastodon node and every home gardening forum and every still existent BBS
AFAICT this was debunked in the first round of comments: https://news.ycombinator.com/item?id=37469593 . As I've been saying, the key property here is scale, which you seem to be willfully ignoring. Please stop holding up tiny, sympathetic, and utterly wrong examples as if they have anything to do with corporate power. Extrapolating from what's right for an individual or small collective to a large corporation/LLC is completely fallacious.
> Because I'm not worried about the authoritarian threat posed by unfettered communication in the LGBTQIA community
Didn't you kind of mismatch the analogy here? More appropriate would be saying that you're not worried about the threat posed to the LGBTQIA community by unfettered censorship. Which is really an appeal to popularity as how you perceive it currently - transplant Big Tech into the 80's enforcing popular social mores, and things would look much different. Just as how big business happily bends to the whims of China.
> I sure am worried about [the authoritarian threat posed by unfettered communication] from the white supremacist community.
This is why I feel justified characterizing your argument as cheerleading for authoritarianism. In order to actually stop the authoritarian threat from the white supremacist community, it is not enough to have the popular bars prohibit Nazism. Rather to stop white supremacists organizing, you have to control all avenues of communication. In other words, reinventing de facto governmental power through the private sector where the constitutional embodiments of natural rights don't apply.
I think the ambiguity of having speech, religion, and press all rolled into one short paragraph is part of why we've historically struggled so much with the 1A.
I wish we could just scrap the Constitution and start over with a version controlled and annotated wiki of rules written in everyday language.
Our legal framework is so undemocratic (maybe even anti-democratic) and unable to address any of the major challenges of modern society... sigh. End rant.
How would a similar situation play out in other countries? My understanding is that the US already has some of the strongest free speech protections despite this (compared to the UK and EU especially). Are there countries, at least ones with strong rule of law, that do this differently and grant individuals rights inalienable by anyone?
I don't know that there is an existing country with stronger protections for free speech than the US, and it is true we do take that for granted. My point is that the negative framing in terms of the de jure government has reached its limits, with those seeking control essentially reinventing governance through "private" entities with "voluntary" agreements that in reality aren't so private or voluntary.
It's quite tempting to cheer for autocratic corporate power when it appears to be doing things that you find favorable, but the wind can shift on a dime.
The states don't have the time and money to go after millions of individuals, so they try to curtail it at the provider side instead. Also to send a message. A lot of it is just political virtue signaling.
(edit: sorry, I misread the parent. Individuals cannot violate the 1A, only governments can)
It's necessary because some (let's face it, X) social media companies are suspected of lying (aka engaging in fraud) to consumers by claiming that they are taking aggressive action against hate speech while not actually doing so.
Sure, they could just take them to court for that fraud and get that information via discovery, but just compelling companies to publicly disclose the info bypasses all that and puts the decision making back into the hands of the consumers. And what is more American than that?
IANAL, but I think it's a slippery slope that may fall too close to prior restraint. It is specifically asking a private entity to report on what it considers hate speech (which itself is still 1A protected), etc.
Having a published content moderation policy would presumably mean that companies are expected to abide by it, which limits their editorial freedoms. Asking a company to be consistent in its approach to speech is still a restriction on its speech. Most people and companies are hypocritical and self-serving when opportunities arise, but that's still protected.
Maybe the reporting law doesn't itself prohibit a company from acting against its published terms, but it sure creates a database that could easily be weaponized against them by a future law or unchecked executive order.
This is a leap. If they publicly espouse some sort of policy and don't adhere to it, it makes them look hypocritical - but I don't see how this limits their editorial freedoms. They can change the policy, or actually adhere to it.
> Maybe the reporting law doesn't itself prohibit a company from acting against its published terms, but it sure creates a database that could easily be weaponized against them by a future law or unchecked executive order.
I mean, I guess? I feel like the same could be said for laws that require worker injury reporting. If at some point in the future it's decided that there should be some regulation around moderation policies (and this seems increasingly inevitable) it would be useful to have some actual data to work off.
The moderation policy could even be "whatever the moderator feels like on a given day". That might get them negative publicity, but it's a policy, and satisfies the law.
Yes, but realistically, what's going to happen the moment they post a policy like that? The media jumps on them, (more) advertisers pull out, etc. That wouldn't have happened if they weren't compelled to publish these content decisions upfront.
It might be close enough to be considered a chilling effect? It's ultimately up to debate by high-powered lawyers. I don't think ANY 1A issue is ever clear-cut...
It's often the case that if a law has the potential to chill speech that would otherwise be expressed (and I think this law falls into that category), it's open to questioning at least.
I think it would have a chilling effect on social media providers who will have to think "how will our policies and practices look to California" (and other states with similar laws). If they have to report the changes every year, it basically strongly pressures a company to report that they are following whatever is politically correct at the moment. It's a slippery slope that approaches/enables prior restraint.
The law has no penalties for the policy you implement. So AB 587 isn't restraining any speech. If CA passes another law that uses the information to restrain speech, then that law may be unconstitutional. But AB 587 doesn't become unconstitutional by imagining all the ways it could be used unconstitutionally.
That's not how the Supreme Court has ruled in the past. If a law makes people reluctant to exercise their first amendment rights because of the ways in which it can be used against them in the future - then it has a "chilling effect" and can be unconstitutional even if the law itself has no penalties.
In Lamont v. Postmaster General [1], the Supreme Court struck down a law requiring the recipient of Communist propaganda to state that they consented to receive it before it would be delivered. There was no penalty for doing so, but the court ruled unanimously that it "imposes on the addressee an affirmative obligation which amounts to an unconstitutional limitation of his rights under the First Amendment."
It's a pretty broad judgment call, isn't it? If it effectively chills speech, it could be a 1A issue. I guess it's up to the SC to interpret.
The lawsuit itself points out that, if nothing else, the California bill forces companies to editorialize on what speech is considered hateful etc. under California's guidelines. That categorization is itself a modern politicized process (especially in polarized states, deep blue or deep red) and different from older 1A protections. That wasn't my argument, but it is part of the lawsuit.
>Even if you don't like anything about Elon Musk’s leadership of X,
From TFA.
>Not a Twitter user or a Musk fan,
From your comment.
It sucks that the level of debate all over the place, including this forum full of people who are more educated than average, has fallen so low that the (extremely simple to grasp) concept of ad hominem is out of place and one has to write such disclaimers frequently in order to bring an argument into a discussion.
One of the two "disclaimers" you cite is a common attempt to reinforce one's own argument by painting it as a statement against bias, which is a subtle (positive) form of ad hominem, not an attempt to avoid ad hominem and focus on pure rational argumentation.
"Slippery slope" and "ad hominem" are the names of fallacies. A fallacy is a defective step in logical reasoning.
Hearers shouldn't be persuaded by fallacies; they're deployed in persuasive speech because (a) many people aren't familiar with logic, and find them persuasive, or (b) because the speaker isn't familiar with logic, or (c) because the speaker is out of logical arguments, but is still determined to win the argument.
I think it's only human, no? That we even have a fancy Latin phrase for it suggests it's a phenomenon that's been with us for a while, and that the ancients and the greats and the geniuses alike have had to work to keep in check.
I know I certainly had to hold that impulse in check, myself. At first I was like "Oh god, what shenanigans is that guy up to now." And then I read the article and realized wait, this guy I dislike might actually have a point this time.
I say that not to be protective, really, but to point out that I think they may have a case here even though I find their entire operation suspect.
I agree, this seems fairly cut and dry. People are jumping past the law as written to downstream actions that become feasible for the state if the law exists. This required reporting isn't the problem to attack though, it's those downstream actions.
Now perhaps (probably) there should be a mechanism in place to stop the gov't from even getting this far, but, AFAIK, such a mechanism does not yet exist and it's certainly not the first amendment.
Is there precedent for the government requiring traditional newspapers to submit reports like this explaining their editorial processes and priorities?
Sure, let's treat Twitter like a traditional newspaper, which would mean they'd be legally liable for all the content they "publish". I think given that they're monetizing extremist content, that'd be a really smart legal play for them.
I am asking if there is any legal precedent for the government forcing any traditional publisher to explain what legal speech they choose or refuse to publish. If a newspaper decided they want to publish a bunch of letters sent to them by the public (as they do), is there precedent for the government compelling the newspaper to explain the basis by which they choose which letters to publish or forbid?
I think this is a case of "it's different because computers", and this wouldn't fly if it were a traditional publisher putting ink on paper.
> I am asking if there is any legal precedent for the government forcing any traditional publisher to explain what legal speech they choose or refuse to publish.
Either you're failing to understand the question, or you're trying to be disingenuous about it.
Newspapers editorialize their content. Elon Musk's Twitter does not, and claims to have no control over what content third parties publish through their service.
However, Elon Musk's Twitter is also manipulating the contents that third parties publish through their service by means of moderation/censorship and boosting.
Given they claim they hold no editorial control over what goes through their pipes but still manipulate the content, they have a responsibility to demonstrate that they are not liable for that content by specifying exactly which rules they enforce and how they enforce them.
Do you understand the difference between assuming responsibility over the content, and claiming that they are not liable for the content they distribute because they don't pick and choose what goes through their pipes?
> If users can still see original tweet content (...)
It was already well established that they can't. For example, Elon Musk's Twitter was already caught shadow-banning people posting pro-Ukraine content, and when Elon Musk open-sourced some of the original Twitter code, people found out that it hardcoded settings to ban discussions of Russia's invasion of Ukraine.
Past HN discussion on Musk's censorship of the invasion of Ukraine:
It did not ban discussions; the algorithm was for ranking tweets in the timeline.
Because you cannot fit all trillions of tweets into everyone's timeline - by definition some tweets will have to be chosen over others.
If a user posted something about Ukraine, his tweet would still show up in his followers' "Following" timeline, just not in Featured (For You), and not always. Mind you, I follow this topic and my feed (For You) is like 50% tweets covering Ukr-Rus war updates. So I never feel any censorship of the war from Twitter.
It is not censorship, because if you follow a user you would still see their tweets in their original form without any altering.
> it did not ban discussions, the algorithm was for ranking tweets in timeline.
Not really. Twitter was hardcoded to downrank discussions on Russia's invasion of Ukraine, and is now shadow-banning pro-Ukraine users but strangely not pro-Russia.
Nevertheless, the whole point is that Elon Musk's Twitter needs to specify how it's censoring tweets, as they cannot claim they don't editorialize while actively suppressing and censoring discussions that contrast with their personal preferences. Russia's invasion of Ukraine is a mere example of the broad editorial reach of Elon Musk's Twitter, and one where Elon Musk himself was already caught red-handed supporting the invaders.
In principle they have the same constitutional rights as newspapers. Unless a double standard is being applied "because computers", which is what I am teasing out. The courts are better about upholding the First Amendment when there is ink being put on paper, because that's something which is well established and understood. As soon as computers get involved, people get this idea that old rules no longer apply "because computers".
This is why the book PGP Source Code and Internals was published; because ink on paper makes the matter clear cut.
The so-called 'double standard' in AB-587 (defining 'social media platform') is entirely consistent with the existing federal 'double standard' set up a quarter-century ago by Section 230, which grants providers of 'interactive computer services' immunity from liability for content moderation decisions. This bill can be seen as oversight layered on top of those existing content-moderation liability protections.
If you want to tilt at 'because computers' windmills, Section 230 is the real target.
The difference isn't "because computers", the difference is that Twitter claims it has no editorial control over the content published on its platform while a newspaper claims complete editorial control. That's a huge functional difference.
> In principle they have the same constitutional rights as newspapers.
This sort of pointless truism means nothing at all. The crux of the matter boils down to the common carrier status, and whether companies can be held liable for the content being handled through their service. If a social media company editorializes the content it distributes, it's liable for each and any consequence of distributing it. Simple as that.
There's precedent for the government to force various different types of organizations to disclose information in the public interest. Newspapers don't partake in any of the activities that this disclosure rule addresses, so I don't think a court will find it to be a relevant comparison, because it has nothing to do with the kinds of activities the government is trying to regulate.
Newspapers enforce editorial policy over their opinion columns and things like which letters to publish. This law sounds like, if it actually applied to them, which it doesn't, they would need to submit a report about why they didn't allow "Letter from Aunt Ethel" to be published. What criteria did they use in not allowing it to be published, etc.
Why did they publish "Letter from Uncle Frank" instead.
Simply, they need to enumerate their internal editorial policies.
And I can see it argued that editorial policies are political speech. We all know magazines or publications and websites that represent a political philosophy, what articles they publish, how the articles are worded, etc. That's everywhere. "Why didn't you allow XXX to be published? Because we don't like the point of view of XXX!"
And part of having a political philosophy and the speech rights around it is that you should be allowed, should be "free" that is, to not say anything at all! When you start getting "have you stopped beating your wife" questions, you should be allowed to say "no comment". Compelled speech is not freedom of speech. Publishing in detail your moderation policies and activities is arguably compelled disclosure of a political philosophy. Now, does a corporation have such a right? Today, it's hard to say. If a corporation apparently has the right to impact elections (economically), that implies it has a political position, and perhaps the detailing of that position (or not) should be protected. IANAL.
The big problem here is this shouldn't be an issue. How many websites have a privacy policy of "We're going to gather everything we can from you, and sell it to the highest bidder. If we could get your DNA, we'd sell that too."
That should be a perfectly viable "privacy policy". If X has to publish "yes, the exact details" of how they moderate extreme speech, maybe they'll just publish "we don't" and leave it at that. Which, of course, brings even more, indirect, scrutiny.
In the end, there's no good answer to this question. Every answer any service would publish, is the "wrong answer".
But Twitter isn't a newspaper engaging in editorial decisions. It is a massive online communications platform that is moderating posts, most often, on an automated basis. I don't think there's any actual support for your comparisons at all.
>Simply, they need to enumerate their internal editorial policies.
I think you are confused: Twitter already publishes its moderation policies; this is a report on the extent to which they stay true to them. Your argument would maybe make more sense in a context where a newspaper had to provide a report on its editorial decisions (which obviously go well beyond the letters-to-the-editor section). But that really only serves to emphasize the huge difference between editing and moderating.
>In the end, there's no good answer to this question. Every answer any service would publish, is the "wrong answer".
I don't think you are engaging with this topic honestly, especially if this is the conclusion you are coming to. As far as I can tell, the only wrong answer for the report would be one that was inaccurate or incomplete. There seems to be nothing in the law about judging whether a moderation system is good or bad, or "does the right thing", let alone something so politically pointed as what you suggested.
OP's analysis is specious at best. This is why you shouldn’t ask non-lawyers for legal opinions.
Lol at the aggressive partisan downvoting. Sorry you feel compelled to such anti-Musk right-think, either sock accounts or a display of hackery. Either way, very stoopid.
I disagree. This seems like a blatant 1A violation by the state:
> AB 587 passed in September 2022, requiring social media platforms to submit a "terms of service report" semi-annually to California's attorney general, providing "a detailed description of content moderation practices used" and "information about whether, and if so how, the social media company defines and moderates" hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference. Under the law, social media platforms must also provide information and statistics on any content moderation actions taken in those categories.
All of the things listed, with the exception of foreign political interference, are protected expression. This is the government trying to outsource censorship.
The law does not appear to mandate any particular form of expression by Twitter, other than filing paperwork that documents their intended policies. Notably, it doesn't seem to require Twitter to have any particular policies around extremism; only that, if they have such policies, they should be made transparent to the public.
I'm not aware of a legal case in which a private entity has successfully made the "compelled speech" argument against government paperwork; that kind of argument is more typically associated with "the IRS is unconstitutional because it compels me to file taxes"-style legal crankery.
I'm sure the government's interest in what is or isn't censored, or how, is purely academic. That's why they had the report go to the attorney general's office and not the state university.
Pretty nice business you got there. Shame if something were to happen to it.
I disagree. "X" does those things as part of a profit generating enterprise; if those things affect the greater population then they absolutely should be disclosed. And before someone says "What about Meta? What about Google? What about Youtube?" Yep, them too. They should all disclose the basis of their moderation decisions. Because they're used to steer public opinion. Because it's been demonstrated that they have been used to manipulate public perception to the detriment of groups of people. Because people have died from misinformation.
I find it's important to avoid "is/ought" fallacies. I'm not sure my warning actually applies here -- this seems like a properly complicated legal topic and especially whenever something has a lot of contemporary relevance, there's a lot of unexpected flexibility in which precedents are considered vs. ignored. But it's a good rule of thumb to follow nonetheless.
The US political Left has wanted visibility into content moderation since at least the 2010s, so the idea that somehow this is "flip-flop" is disingenuous. The US political Right has wanted this information too, at least until they felt they had one of their own pulling the levers. Keep in mind that nothing about this policy mandates any content be posted or removed, nor does it impact X's ability to ban or not ban anyone they want for content they post (laws from Florida and Texas cannot claim the same).
The Washington Post quoted an expert as saying this law, A.B. 587, is "likely to be struck down as unconstitutional". This was back in 2022, when the law passed—before Elon Musk attached himself to this story and polarized everyone.
Might be interesting to see if WaPo posts an article that slants in favor of the bill now that Musk is against it. That article was quite neutral, IMO.
Newspapers pretty regularly have articles on multiple sides of an issue, though sometimes one viewpoint is relegated to the opinion section.
Crucially however, "this law is unconstitutional for reason Q" does not mean that "Musk says making their business codify their moderation is against free speech" is correct.
Mike Masnick is probably one of the people most critical of Elon. You may have read one of his articles before, perhaps in the early days when Elon bought Twitter. His analysis of Elon's obligations to purchase Twitter, his criticism of Elon's approach to content moderation (and lack of understanding of it): it's all been pretty good and legally sound.
The fact that he is praising Elon here says something.
In theory, where people are perfect reasoners, it doesn't. Realistically, people tend to form opinions that fit the narratives they want. If someone who normally takes one side of a politicized issue instead takes the other side, that suggests that the case for the other side is particularly compelling because it compelled this person to switch sides.
I'm more interested in his qualifications in legal analysis to come to his conclusion that Musk's challenge is strong, and frankly, I don't really think his opinions on Musk have a strong bearing there. Especially given the fact that I don't think there is a reasonable legal basis to say the challenge is strong. So him being against Musk has gotten me nowhere in this.
Masnick is a long-time and respected tech blogger with good legal knowledge as far as I can tell. His articles are usually pretty fact-based and well-informed. And as others have said, he's usually pretty critical of Musk. So yes, I think his opinion on this matter is to be taken seriously.
It's not really so much "Mike Masnick's opinion" as "Masnick explaining his TechDirt colleague's, Eric Goldman's, opinion".
Eric Goldman is a "leading expert in the fields of Internet Law" [0]. His assessment of this bill's constitutionality is probably a good one. The Washington Post cited it in one of their stories [1], and quoted the part where he says it's "likely to be struck down as unconstitutional at substantial taxpayer expense".
The bill's definition of "social media platform" has no user-base requirement, so wouldn't every Mastodon server and small self-hosted message board have to submit to this too or face fines?
>22680. This chapter shall not apply to a social media company that generated less than one hundred million dollars ($100,000,000) in gross revenue during the preceding calendar year.
I was about to type up a message about how this affects private services too, buuuuut... section 22680 pretty summarily exempts self-hosted instances. I doubt any of them are generating $100,000,000 in gross revenue per year.
Not that I am a particular fan of Twitter, and I think forcing some transparency about how you moderate content is quite a good idea - but doing that at the state level is annoying. Every nation already has its own rules and laws, which makes sense for physical goods but is much harder to track for software. The US should at least have one unified rule on these topics. I understand that's not really how legislation is usually made, but look at the GDPR: even Europe (which is not a country) managed to do it in a centralized way.
This is actually one of Twitter's legal arguments against this law: Section 230 explicitly gives companies the ability to moderate user-submitted content as they please.
By putting additional restrictions on moderation activities (the requirement to report to the government how they are moderating), this state law contradicts the federal law, and when the two conflict, federal law wins (preemption).
We will have to wait to see if the judge agrees with this argument, but that's one of the reasons Twitter believes this law should be struck down.
This is legally nonsensical - nothing about Section 230 precludes any further restriction on moderation by another entity. Section 230 is and has always been about the act of moderation not specifically creating liability for the carrier of user-generated content - nothing about Section 230 grants the publisher any specific right not to be subject to further moderation at another level.
The First Amendment angle is a stronger argument, but unfortunately for X, it's also extremely weak and would require overriding decades of precedent, essentially creating a crisis: there are far more onerous regulations in place that are still relied upon, in a way that having them constitutionally challenged would be extremely damaging.
I think this is a good take - the legal theories mentioned in this thread (1st amendment, Section 230) seem mostly baseless, but this doesn't seem like appropriate legislation at the state-level either way. I don't know enough about the commerce clause restrictions on state regulations, but I do wonder if that comes into play given that moderation policies will almost always have interstate and international commerce implications.
Text of the law:
> (3) A statement of whether the current version of the terms of service defines each of the following categories of content, and, if so, the definitions of those categories, including any subcategories:
(A) Hate speech or racism.
(B) Extremism or radicalization.
(C) Disinformation or misinformation.
(D) Harassment.
(E) Foreign political interference.
I like to imagine what this would look like if the other side was in charge. Say Russia or Florida for that matter. (In case of Russia, one doesn’t have to imagine).
> (3) A statement of whether the current version of the terms of service defines each of the following categories of content, and, if so, the definitions of those categories, including any subcategories:
(A) LGBT propaganda.
(B) Woke ideology.
(C) Harassment of religious freedoms.
Not so hard to see how complying with this law would be compelled speech, is it now? Whether one says yes or no, you’re implicitly agreeing that these categories are real things. You can’t quite say “I disagree with the law as written” unless you sue, which is what’s happening here.
> you’re implicitly agreeing that these categories are real things
You have this the wrong way around: these things have legal definitions in both state and federal law, and your "agreement" is not a mandatory condition.
Put another way: the state of California is not interested in whether you think hate speech is real. The suit doesn't hinge on that at all; it hinges on whether the government can compel corporate speech that amounts to a disclosure of internal policies. Which it can, for the same reason that the government can compel businesses to do their taxes, file permits, and disclose their political contributions.
This seems to be at least part of the complaint if I’m reading it right:
> AB 587 thus mandates X Corp. to speak about sensitive, controversial topics about which it does not wish to speak in the hopes of pressuring X Corp. to limit constitutionally-protected content on its platform that the State apparently finds objectionable or undesirable. This violates the free speech rights granted to X Corp. under the First Amendment to the United States Constitution and Article I, Section 2, of the California Constitution.
As for the precedent,
> Which it can, for the same reason that the government can compel businesses to do their taxes, file permits, and disclose their political contributions.
My understanding is that what can be compelled is pretty limited (e.g. facts in advertising, ingredients, etc.), it’s not a blanket precedent for the state to ask you to produce anything at any time.
> AB 587 thus mandates X Corp. to speak about sensitive, controversial topics about which it does not wish to speak in the hopes of pressuring X Corp. to limit constitutionally-protected content on its platform that the State apparently finds objectionable or undesirable.
Except that it doesn't: being compelled to produce evidence of your policies (which the law doesn't even require exist) isn't the same thing as being compelled to adopt a position. The law stipulates the former, not the latter.
As a framing: when the EPA compels a corporation to produce an environmental impact statement, they're not compelling the corporation to assume a position on the environment or anthropogenic climate change. Being asked to produce internal policies on hate speech, etc. similarly doesn't compel any particular opinion on Twitter's part.
> My understanding is that what can be compelled is pretty limited (e.g. facts in advertising, ingredients, etc.), it’s not a blanket precedent for the state to ask you to produce anything at any time.
We're talking about factual materials that neither party disputes. Every state in the US has a complex web of transparency laws that compel companies to produce all kinds of information for reasons of public interest; this case is no different.
Examples: Salary & pay transparency laws, board and LLC transparency laws, etc.
No matter what your opinion is on these categories, "A statement of whether the current version of the terms of service defines each of the following categories of content" is a quite straightforward question - what do your terms of service say? How can you get from that to "implicitly agreeing that these categories are real things"? If your ToS mentions them, then you say yes, if it doesn't mention them, then you say no.
Saying "The currently active terms of service document (attached) does not define 'woke ideology' as a separate category of content" is a simple factual statement, and it does not imply anything that you didn't already write in that terms of service agreement; I see nothing wrong with companies being compelled to provide an answer to this - it is relevant factual information about the product they're distributing to people in California; the companies do not have a constitutional right to keep all their internal documents secret, they can be compelled by law to disclose them.
The US 5th Circuit Court of Appeals ruled that certain administration officials – namely in the White House, the surgeon general, the US Centers for Disease Control and Prevention, and the Federal Bureau of Investigation – likely “coerced or significantly encouraged social media platforms to moderate content.”
What California wants is clarification and explanation of the moderation process as it applies to X. A product disclosure like this is common in nearly every other consumer product in the US. Prop 65, for example, routinely mandates this sort of disclosure for lead or cadmium content in a product.
The reason Musk specifically does not want to disclose this information is because the moderators were all sacked a year ago... I think California knows this.
It is not a question of what California wants. California is attempting to coerce. X is arguing that CA's coercion violates both the 1st amendment and Section 230. Glad to engage further if you or others want to address the merits of those arguments. No need to impugn intent onto specific individuals.
All laws are coercive. That's how laws work. So I don't even know what your first statement is trying to get at.
$15k/day -- roughly $5.5M/year -- on companies over $100M in gross revenue (much less the several billion generated by Twitter) is not more coercive than many other laws. The penalties for some laws go up to and including death... so this is definitely within the typical range of penalties.
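For what it's worth, the arithmetic checks out; a quick sketch, taking the $15,000-per-day figure from the comment above rather than from bill text quoted in this thread:

```python
# Back-of-the-envelope check of the penalty figures cited above.
daily_penalty = 15_000          # $15k per violation per day (per the comment)
revenue_floor = 100_000_000     # section 22680's gross-revenue threshold

annual_penalty = daily_penalty * 365
print(annual_penalty)           # 5475000 -- roughly $5.5M/year

# As a share of the smallest covered company's gross revenue:
print(annual_penalty / revenue_floor)  # 0.05475 -- about 5.5%
```

So even for a company just barely over the $100M floor, the maximum exposure is a single-digit percentage of gross revenue, which supports the "within the typical range of penalties" point.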
The law requires disclosing your policy and how you applied it. Musk is out on a limb if he's claiming that giving stats on what actions were taken is the same as the action itself.
> Prop 65 for example routinely mandates this sort of disclosure for lead or cadmium content in a product.
The dangerous (overt?) implication you're making is that some speech is "poisonous" and the government needs to step in and make sure the people aren't being "poisoned"
No? All they're requiring is clarification on what moderation (if any) is happening. This is almost the opposite of what you're describing, in that the more moderation you do, the harder such clarifications become; if you do none, then there's nothing to disclose.
Of course, arguably most people _want_ some minimum level of content moderation, so whether it's beneficial to do more or less content moderation is up to the company, they just have to disclose it.
> Words can kill, a Massachusetts Juvenile Court judge decided last Friday, when he found 20-year old Michelle Carter guilty of involuntary manslaughter in the 2014 suicide of her then-boyfriend, Conrad Roy III.
"""
The Court in Brandenburg, in a per curiam opinion, held that Ohio's Syndicalism law violated the First Amendment. According to the Court, "constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.
"""
The law does not restrict that. "Shouting fire in a crowded theatre" is a myth. Anyone parroting this immediately outs themselves as lacking even the most basic knowledge of the First Amendment.
The court case where that quote came from was overturned 54 years ago!
I was not aware of that court case, but that is enlightening. I have much learning to do.
But based on reading through the report at the findlaw article in a sibling comment, (in my opinion) I think it's a pretty dangerous precedent, and definitely a pillar of the breakdown of modern political discourse.
"The act of shouting fire when there are no reasonable grounds for believing one exists is not in itself a crime, and nor would it be rendered a crime merely by having been carried out inside a theatre, crowded or otherwise."
Your other responses have correctly explained why you're wrong but I just want to add a little bit more context: "Shouting fire in a crowded theater" was a euphemism by Justice Holmes to describe the act of protesting the draft.
It did not come from a case about a theater and a human stampede as many naturally assume. It came from a case about a war protestor being arrested for telling people they should resist the draft (decidedly political speech.)
Instead of Prop 65, look at it like nutritional labeling rules. There is no implication that having to list, e.g., how much protein is in a serving of your product means that protein is dangerous.
Yes, but if you're not being pedantic for the sake of scoring points, all the aforementioned things are capable of causing significant harm if mishandled, so it's still quite relevant.
Equating having a post arbitrarily or even maliciously removed from a social media platform with physical harm is an extremely absurd, out of touch, pampered, and privileged position.
Mishandling of heavy metals can cause lifelong effects, not just to those handling them but to anyone in the vicinity. [0] It's estimated that 1M people die per year from lead poisoning [1].
Content moderation cannot directly cause any physical harm. If you consider indirect physical harm related to all social media (which I'd have more sympathy toward), it would not come close to the effects of heavy metals and other substances known to the state of California to cause cancer, birth defects, or other reproductive harm.
But the government isn't allowed to regulate the speech based on harm unless that harm would result from lawless action which is also incited by the speech and probable to happen based on the speech.
People should really consider this more as a "truth in advertising" type of law. This isn't about whether Twitter follows its policies as written, as you would be hard pressed to convict any company for merely not following its own rules about something that isn't illegal (allowing inflammatory speech on your platform, or even signal-boosting such speech); it's more about requiring companies to be honest about how they moderate their consumer participation.
This law is so consumers can make educated choices about the platforms they want to use.
That's a very good point and I have no arguments with it.
Unfortunately, nothing about the California law really addresses it. The Fifth Circuit Court decision regarding coercion of social media sites will bind to the states via the Fourteenth Amendment, so California can't really enforce anything if they disagree with a company's moderation policy.
That means the law reduces to perfunctory data collection, and it doesn't really tell consumers anything that logging into the site and going "Gee, this site sure is full of white supremacists advocating stochastic terrorism and nobody does anything about it" wouldn't tell them.
I don't understand why anyone would downvote this. Can't people ask questions these days? Especially questions that prompt significant discussions and clear the climate and misconceptions some of us have?
They are both under the jurisdiction of the US federal court system. Something applying to the White House means it applies to individual states as well.
I'm not sure that is necessarily true, but the essence of your point, I think, stands: based on past outcomes, it's likely that California could face a similar result to the case you are referring to.
Literally not in this case. Circuit courts establish binding precedent in their circuit, but not elsewhere. Out-of-circuit opinions can be used as persuasive authority, but there is absolutely nothing that requires the 9th Circuit (which includes California) to listen to what the 5th Circuit says. Especially when the 5th Circuit is disagreeing with every other circuit to have considered the matter. [I haven't read the opinion in this case to know what it's asserting, but I do know that every opinion I did read on whether the government urging COVID-19 moderation qualified as unconstitutional state action concluded that the plaintiffs hadn't made their showing that it did.]
Not a lawyer but if that’s the case, could California say “Fine, but Twitter can no longer do business in California”?
Edit: not necessarily saying they should, I'm just wondering if they can.
Edit 2: Looks like the most they could do is make it harder for social media companies in general to do business. If they were perceived as targeting Twitter then they could have grounds to sue.
Based on 20 minutes of reading so grain of salt applies.
Umm, the bill of rights is a set of restrictions on the _federal_ government. The last one is explicitly a statement that the states can do a lot of things that the federal government _can't_.
There is the supremacy clause, but goodness knows where that would end up here. _Everything_ involving real money or power seems to make it to the supreme court these days, and who knows what the political landscape will look like by the time it does (yes, I am asserting that the supreme court has become more political than it used to be, _and_ that it used to be pretty political...).
> the bill of rights is a set of restrictions on the _federal_ government
The First Amendment as it is literally worded is, since it specifically says "Congress shall make no law...". But the rest of the amendments have no such restriction; they just say certain things shall not be done, period. Given the Supremacy Clause, that means those provisions should apply to all levels of government, not just federal. (Granted, the courts originally did not interpret them that way, but IMO they should have.)
That said, current jurisprudence, regardless of the literal wording of the bill of rights, is that they apply to the States, even the First Amendment. IIRC most Supreme Court decisions along these lines have cited the Fourteenth Amendment.
> Umm, the bill of rights is a set of restrictions on the _federal_ government. The last one is explicitly a statement that the states can do a lot of things that the federal government _can't_.
Taken literally, yes. But legally, many (but not all) of the rights have been 'incorporated' to apply to the states. This includes the First Amendment.
> Umm, the bill of rights is a set of restrictions on the _federal_ government.
That hasn't been the case since the ratification of the 14th Amendment way back in 1868.
All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside. No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.
Courts have repeatedly held that the Bill of Rights does apply to the states, by means of this so-called "due process clause" in the 14th Amendment.
Edit: changed "incorporation clause" to "due process clause", as that seems to be the name under which it is more generally known.
If Twitter solicits you to purchase Twitter Blue, that's Commercial Speech. If Twitter bans your account for praising Hitler, that's [Twitter exercising] political speech: Twitter would be protected by the First Amendment. The mere fact Twitter is a commercial, monetized service doesn't trigger a Twitter-wide First Amendment exception—any more than, say, the New York Times being a for-profit corporation opens the door for the feds to censor its political columns. Even if they're behind a paywall.
Commercial Speech is a narrow carve-out for "advertisements and solicitations". It's not applicable to Twitter moderation.
There is a good argument that company policies about product use is commercial speech. "Here take this opioid, we have funded studies that say it won't hurt you" got regulated pretty hard. "We think the 'woke mind virus' is worse than capital-F Fascism and will moderate that way" is very much about Twitter's product.
Sure, but Twitter banning (or not banning) your account for praising Hitler is political speech, however, Twitter stating "we are/aren't a platform for free speech" or "our moderation will/won't ban your account for praising Hitler" is commercial speech similar to it soliciting you to purchase Twitter blue, it's a statement about the media service they're offering as part of their business, and that's something that can be reasonably compelled.
The law does not make any requests about how Twitter should moderate things, it asks for information about how Twitter does moderate things. First amendment protection should ensure that government is prohibited to impose restrictions if a company says they will/won't ban accounts for praising Hitler, however, the people certainly have the right to take action in response to that, and the government has the right to compel Twitter to disclose to these people truthful information about their media product.
If they have a policy document stating "posts which contain more than three letters 'z' shall be deleted", they have a right to moderate this way if they wish - however, do they have a constitutional right to keep that policy document secret from the public? The way I see it, laws are permitted to regulate the disclosure of company policies.
Easily resolved: Remind X that they are responsible for all content published on their platform.
You can't expect to have immunity under Section 230 if you aren't going to provide "good samaritan" blocking and screening of offensive material.
The exemption is what has allowed the internet to become what it has today. Lawmakers have already threatened to change this in the past. I'd much rather they see the current law already handles bad actors than for them to introduce something "new".
That is probably the most useless yet technically correct link anyone has ever given here, so congratulations on that. :-)
For those who don't want to search through that nearly 50,000-word bill, which is a massive mix of additions and diffs updating a large part of telecommunications law, here is a link to where the small section relevant to this discussion ended up codified in the US Code [1].
Clarification: I am not saying that their interpretation of Section 230 is correct. I am just saying that the link they provided to the text of Section 230 is in fact a link to the text of Section 230.
>You can't expect to have immunity under Section 230 if you aren't going to provide "good samaritan" blocking and screening of offensive material.
In fact the law does not obligate service providers to moderate at all (except in accordance with a few narrow laws around sex trafficking, IP, and some criminal stuff). It does say that IF you moderate:
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
The phrase "offensive material" doesn't mean offensive to ME in particular, or necessarily to a "reasonable person"; in context it means offensive to the service provider.
The good samaritan clause doesn't say that the provider of an internet service HAS to provide those things, just that you aren't considered the publisher or speaker if you DO moderate.
Compelling the company to disclose their internal moderation discussions is compelled speech. The 1st amendment provides broad protections here. The argument that their internal moderation discussions are like ingredients in a physical product is really flimsy.
Specifically, Section 230 makes clear that IF you make a good faith effort to remove illegal content from your platform, you will not be held liable for illegal content you miss.
That's it. Nothing about "publisher vs platform" or whatever. The entire point of the law, as designed, and intentionally, was to prevent every single website owner from being in violation of child porn laws. It is an EXTRA freedom, given to website operators.
They're blocking content (or demonetizing) for their own profit. Whether it's to keep advertisers or lower their payouts is a bit irrelevant. Maybe being forced (under penalty of law) to reveal at least the process of that moderation would make what they're up to clearer, both internally and to users and content generators, allowing "the product" to make decisions in a bit less of a vacuum. Darkness and opacity in moderation doesn't seem like it serves the public interest.
I think the part that they probably have issue with is the
"(5) (A) Information on content that was flagged by the social media company as content belonging to any of the categories described in paragraph (3), including all of the following:
(i) The total number of flagged items of content.
(ii) The total number of actioned items of content.
(iii) The total number of actioned items of content that resulted in action taken by the social media company against the user or group of users responsible for the content.
(iv) The total number of actioned items of content that were removed, demonetized, or deprioritized by the social media company.
(v) The number of times actioned items of content were viewed by users.
(vi) The number of times actioned items of content were shared, and the number of users that viewed the content before it was actioned.
(vii) The number of times users appealed social media company actions taken on that platform and the number of reversals of social media company actions on appeal disaggregated by each type of action.
(B) All information required by subparagraph (A) shall be disaggregated into the following categories:
(i) The category of content, including any relevant categories described in paragraph (3).
(ii) The type of content, including, but not limited to, posts, comments, messages, profiles of users, or groups of users.
(iii) The type of media of the content, including, but not limited to, text, images, and videos.
(iv) How the content was flagged, including, but not limited to, flagged by company employees or contractors, flagged by artificial intelligence software, flagged by community moderators, flagged by civil society partners, and flagged by users.
(v) How the content was actioned, including, but not limited to, actioned by company employees or contractors, actioned by artificial intelligence software, actioned by community moderators, actioned by civil society partners, and actioned by users."
Combined with this section of the bill
"(b) Actions for relief pursuant to this chapter shall be prosecuted exclusively in a court of competent jurisdiction by the Attorney General or a district attorney or by a county counsel authorized by agreement with the district attorney in actions involving violation of a county ordinance, or by a city attorney of a city having a population in excess of 750,000, or by a city attorney in a city and county or, with the consent of the district attorney, by a city prosecutor in a city having a full-time city prosecutor in the name of the people of the State of California upon their own complaint or upon the complaint of a board, officer, person, corporation, or association.
(c) If an action pursuant to this section is brought by the Attorney General, one-half of the penalty collected shall be paid to the treasurer of the county in which the judgment was entered, and one-half to the General Fund. If the action is brought by a district attorney or county counsel, the penalty collected shall be paid to the treasurer of the county in which the judgment was entered. If the action is brought by a city attorney or city prosecutor, one-half of the penalty collected shall be paid to the treasurer of the city in which the judgment was entered, and one-half to the treasurer of the county in which the judgment was entered."
Basically the law requires a full disclosure of all content moderation decisions and explicitly why they were made, and then gives various city DAs an incentive to bring suit by promising them a 50% cut of whatever money is made in the lawsuit.
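As a rough sketch, the (5)(A)/(5)(B) subsections quoted above amount to a table of aggregate counts, one row per combination of the disaggregation keys. The field names below are my own shorthand, not language from the bill:

```python
from dataclasses import dataclass

@dataclass
class FlaggedContentRow:
    # (5)(B) disaggregation keys -- one row per combination
    category: str        # paragraph (3) category, e.g. "harassment"
    content_type: str    # posts, comments, messages, profiles, groups
    media_type: str      # text, images, videos
    flag_source: str     # employees/contractors, AI, community mods, users...
    action_source: str   # who took the action
    # (5)(A) aggregate counts
    flagged: int             # (i)  total flagged items
    actioned: int            # (ii) total actioned items
    user_actions: int        # (iii) actions against the responsible user(s)
    removed_or_demoted: int  # (iv) removed, demonetized, or deprioritized
    views: int               # (v)  times actioned content was viewed
    shares: int              # (vi) times shared before action
    appeals: int             # (vii) appeals filed
    reversals: int           # (vii) reversals on appeal

# A hypothetical row in a semiannual report:
row = FlaggedContentRow(
    category="disinformation or misinformation",
    content_type="posts", media_type="text",
    flag_source="users", action_source="company employees",
    flagged=1200, actioned=300, user_actions=45,
    removed_or_demoted=250, views=90_000, shares=4_000,
    appeals=60, reversals=12,
)
```

Note that every field is either a classification label or a count, which is why the reply below can fairly say the law requires statistics about actions taken rather than disclosure of individual moderation decisions.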
It doesn't require disclosure of any content moderation decisions at all, but only how many times particular actions were taken.
Musk's objections are likely because it would disclose how little moderation is happening on the platform at all and would tend to show a collapse in usage of the platform.
I didn't realize how draconian and big brother-ish the law was:
> AB 587 passed in September 2022, requiring social media platforms to submit a "terms of service report" semi-annually to California's attorney general, providing "a detailed description of content moderation practices used" and "information about whether, and if so how, the social media company defines and moderates" hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference. Under the law, social media platforms must also provide information and statistics on any content moderation actions taken in those categories.
This is quite literally thought police. You literally have to send the state's lawyers details on how you moderate content to ensure it complies with whatever arbitrary thing the state believes in at the time. This is clearly not constitutional and the state has no business whatsoever dictating how speech is moderated.
> It does not dictate anything, just requires companies to report what they’re doing.
And what possible interest does the State of California have in gathering this information? It's close enough to have a chilling effect, and thus the state should have to provide a justification.
LOL, if we left everything to companies, they wouldn't even report the list of ingredients they put in your food. I get that governments can overreach, but not all constraints on private entities are bad.
Yes, there is an obvious public health interest in food labelling. I feel that the compelling interest is not obvious here, and given that the law is not censorship, but lives in the same zip code as censorship, the state should demonstrate a public interest being served that would withstand 1st amendment scrutiny.
Users, many of whom are in California, should have the right to truthful information about the product they're using (which, for a media product, includes the moderation policies of that media), and the state has a reasonable interest in ensuring that users get this information.
No they don't. There is no case law or constitution or bill that says you have this right. You have no rights to this private property. The state can have all the interests it wants, but we have laws, including property laws. The state can't just say it has an interest in something and then take it. This country was literally founded on principles opposing that!
What "rights to private property"? It's about information, not property, and it is well established that the state can "just state an interest" in truthfulness about products sold to its citizens and compel a manufacturer to list all the ingredients they use in making their product and put that information right on the packaging, even if they'd prefer to keep them secret. Asking a media provider to list their moderation principles seems to me pretty much the same as asking a soap manufacturer to list all the things they put in their soap; in both cases their right to do something in their product does not include a right to hide from the public (or worse, mislead them) about what exactly that product includes.
One possible interest: you are living in a democracy. Social media has shown strong tendencies to support the abolishment of democracy (or, as it is also called, fascism).
It is expected of a government to resist against any tendencies to abolish democratic rule and peaceful transition of power. That is the single most important task given to a government by us. And soley for that purpose we often allow them to wear a big stick — why? Because outside of democratic rule the law is a arbitrary set of words that can be interpreted by the ruling class in any way they please and that isn't good for the individual, unless you are foolish enough to believe you'd end up in that ruling class (that is 50% of the appeal of fascism).
Now, just as a government has every right to forcefully resist coup attempts and such, it can (and does) also restrict certain social movements that aim for its abolishment. Usually this happens more on the left than on the right, though — remember the whole communism craze in the US?
Yes? I fail to see how giving me, a consumer, information about how the businesses I patronize operate is a bad thing.
Transparency about business operations is a good thing for consumers. We force it in the case of a "public" company in the interest of allowing people to make good investment decisions, why don't we allow such reports to benefit the non-investing public as well?
If it is actually in defence of democracy sure. I have lived in nations with a press council before and the press quality was better than in the ones without one.
In the end the only true defence are people that want to uphold democracy. If you have them, laws can be phrased as badly as you like, if you don't, then they can be phrased as well as you like, they will "only" buy you time.
If your gov. makes bad laws, don't elect them next time. And if your only defence of the freedom of the press is to make corporations people (but only if it benefits them), those are bad laws.
Reporting stats is usually step 1 to regulating those things...
It won't be long before laws say things like "99% of takedowns must occur within 1 hour of a report being made" or "No more than 1% of users may see content which is later taken down on any given day, otherwise we will fine you for insufficient moderation"
That, combined with a court making a few decisions on moderation (months after the fact), means that the only way for a platform to make sure that < 1% of users see content taken down by a court later is to remove all except the safest content right away, before anyone sees it. End result: self-policing down to only uncontroversial, boring stuff.
1. Corporations having "free speech" is a fucking perversion of the original purpose of that law. Corporations are not people. Governments protecting corporate rights more than individual rights is the root of a good chunk of all problems the US is having right now.
2. I expect my government to scrutinize the hell out of corporations and protect the rights of individuals. Article 1 of the European Charter of Human Rights states: the dignity of man is inviolable. People having to be at the receiving end of discrimination, racism, sexism etc. is a violation of their dignity — their rights. Rights I expect them to protect.
Now I always thought of people in the US to be adamant about individual freedoms, but maybe I have been wrong and it is more about shilling for corporations and letting them divide you into small, easy to manage camps with the help of corporate media and lobbied politicians that get surprisingly rich when they are in office.
And yet so-called "reverse discrimination" is widespread, normalized and even celebrated. Hate towards all men, all white people, specifically white men, straight people, "cis" people, old people, neuro-typical people, able-bodied people.
The progressive narrative is that this type of discrimination doesn't count because these people are magically privileged and powerful. Letting this go completely unchecked is what is fueling a new rise in "traditional" types of discrimination. It's hate fueling more hate.
If you believe in the rights of individuals, it should be based on first principles. All individuals, not just the ones you happen to like.
> If you believe in the rights of individuals, it should be based on first principles. All individuals, not just the ones you happen to like.
You seem to be implying a lot about my political views here. That implication says more about you than it says about me. Growing up in a (bi-)polarized culture must be a pain.
I don't think men should be discriminated against just because they are men (and I am biased, I am a man).
I was just objecting to the idea that being against discrimination seems like a no-brainer. It isn't because the status quo widely allows it, if only it's the correct type.
The thing with discrimination is that the frequency with which it appears is a crucial parameter in the harm it does to people (and we are interested in preventing that harm).
So if I get catcalled as a guy once in 3 years it has a different impact than a girl getting catcalled twice a day.
If someone discriminates against me for my white skin once or twice, while I have profited from it my whole life, this is of course wrong, but it won't hurt my prospects in life. If I had dark skin, the racism would be a daily occurrence and something that seriously impacts my prospects in life.
So if the actual impact in reality differs, we can also treat those things differently in policy, don't you think?
It upsets me that "companies shouldn't have the same rights as people" is apparently controversial in the US.
A "company" is explicitly an organization that we throw a handful of benefits and handouts to as a society, like limited liability and apparently the unwillingness to punish you for crimes. Surely it's justified that this legal fiction should be held to some standards that you couldn't necessarily hold a person to.
Hell, if you believe "Free speech" should apply to businesses, then "False marketing" laws should be unconstitutional! That's insane.
Corporations are made of people and are protected from government tyranny. This is case law.
No one has a right to not be offended. Race, sex, etc are arbitrary things. Why not add weight, hair color, accent, etc to the list?
No - what it means is government will respect those aspects of people and treat all with dignity. But private citizens and the organizations they build are not compelled to that standard.
What you describe is totalitarian, authoritarian government that exists to oppress freedom and liberty.
This is not a real counterargument, nor a fallacy in itself (slippery slopes do exist). It happened in other countries already (e.g. Germany), so it makes sense to assume this is happening here as well - especially with California.
The real question is: What is the justification to mandate a collection of this data?
The public's interest in knowing what the social media companies are doing to moderate their platforms. It's everything Musk complained of, not knowing about Twitter, prior to his purchasing it. A report would reasonably allow the public to come to its own conclusion whether or not these social media companies are fairly following their policies or not, or if they are targeting specific groups.
It's facially clear that Musk's issue is that he doesn't want it to be revealed that he hasn't been fair, has been arbitrary, and has been boosting the groups he's been alleged to have been boosting, because this will all be inherently reflected in the report. Also likely to be reflected in the report is information that can lead people to the conclusion that Twitter use is down, which he doesn't want to reveal because it will make his decisions look stupid.
Except the entire point and concept of government/states is that most slopes AREN'T slippery and will have a line drawn somewhere.
You can't just say "slippery slope", you have to provide evidence that there's strong public (in a democracy) and political will to continue down that slope.
Even if you believe abusing Twitter is popular and wanted from democrats (it isn't), it sure as hell isn't popular for the entire nation.
Slippery slope only applies as a critique when it's spurious. In this case the precedent of government-required reporting leading to regulation is very well established.
Huh? This requires the actual thought police--you know, the unaccountable social media platforms--to publish their rules. Or would you rather they just continuing applying their own unaccountable and secret moderation with no consequences or oversight?
> Or would you rather they just continuing applying their own unaccountable and secret moderation with no consequences or oversight?
Yes, it is their business. If people don't like that, and advertisers etc. don't like that, then don't use it or advertise there. The market will determine if it's viable or not. The state has no business policing that.
It's a shock to me that the "free speech absolutists" running X can't be bothered to even put down their policy in writing. Actually, it's not a shock. They don't want it because they're not free speech absolutists and are actually planning on being unaccountable, biased, and corrupt.
We know X is moderating content (so much for "free speech", but that's another issue). What we don't know is on what basis the content is being moderated, what's being moderated, and so forth.
In other words, Elon Musk is flat-out lying when he calls X a "free speech" platform. It is not - and that's not necessarily a bad thing. What is bad is hiding your moderation policies. That's what this law addresses.
Moderation is okay so long as it's done in the open. Everyone is talking about California taking away free speech, what they're really doing is preserving free speech by forcing companies to disclose when speech was moderated. That is serving the public good.
What philosophical, moral, or economical principle are you basing this off of? Because the free market principle does not reign here; it suggests that prices can be discovered on an open market between rational buyers and rational sellers of roughly equal standing. Thats it.
> What philosophical, moral, or economical principle are you basing this off of?
The Enlightenment - free speech and liberty. They are a speech platform and should not be infringed upon by the government. The people, ergo the market, will determine if they are viable through popular opinion. If they are as vile and off-putting as some people say, then people won't use it and they'll either moderate in a way that makes them viable or die.
I mean, which way is it? On one hand I hear that they are toxic, no one uses it, and no one will advertise on it; on the other hand I hear that they need to publish their moderation policies and should not allow speech that certain people don't like.
In other words, all hail our new corporate thought police. The market can do no wrong, right? Sick of getting squashed by big tech? Feel like the market (of billions of dollars of ad money) didn't punish them hard enough? Just launch your own Twitter with that spare $X billion you've got lying around! Free speech for the rich and the advertisers, just like the Enlightenment promised!
You keep getting downvoted because the rest of us kinda watched the internet happen for the past 20 years. Where were you?
Actually all my comments in this thread are at 1 and one of them at 2. It has been all over the place up and down.
And just because a comment is downvoted here doesn’t mean it’s bad or wrong. In fact look at all the engagement it has created. Clearly there is a conversation to be had. Trying to dismiss it with quips like yours probably feel good to type but are meaningless.
By your comments you belong to a class of people that feels entitled to things you never earned or contributed to and would like an authoritarian government to steal it. This is the worst kind of person. Far worse than a fascist.
Ha, I read Ayn Rand 20 years ago, before the last stages of anarcho-capitalism revealed how vapid and destructive that non-philosophy is.
I don't feel entitled to anything, but I do think you get what you pay for. And America has been paying for a shitty government and has a shitty government with no serious hope of fighting off the endlessly metastasizing corporate ticks that are constantly rent-seeking, penning us in to less and less choice, and then gaslighting the most gullible into thinking that it's the "free market" or whatever.
But yeah, keep thinking about people who want a functioning society as "looters" or whatever. We just want people to pay their taxes, especially rich people who can frickin afford it.
America isn’t perfect but it’s pretty good overall. The government is made up of the people. Complaining about it like it’s the cable company not delivering good enough service is ridiculous. Get in there and make the change you want to see. Get involved.
I’m not Randian, but I do agree that it’s problematic how some people believe they’re entitled to the property of others. That they should be included because they exist, and support policy that confiscates, mainly out of greed and envy.
It’s mainly people that think they should be amongst the elite and they aren’t. Which to them is proof the country is bad.
And notably, we require certain "truth in advertising" from companies in most democracies, so that they can't lie to you to get your business. These laws have been obscenely weak in the US, for dumb excuse reasons. Making social media platforms open up about their moderation policies lets consumers make educated decisions about where they want to say what.
"We do/do not moderate Q" is advertising when it comes to public platforms, and they should be forced to abide by their advertising.
Markets don't work well without good oversight. A fundamental method of Capitalism is the information imbalance. This law is helping to keep consumers informed - where otherwise they would not be at all.
Sorry to chime in, but that law just says they have to disclose it, not that it has to follow a certain ideology. That is not thought police IMO. There is no right for social media corps to not have to disclose how they moderate their content. That isn't infringing on their free speech, just like you having to file your taxes isn't infringing on yours. And if there are hate speech laws, they have to of course comply with them.
That being said, tobacco manufacturers have been forced by law to adjust the designs of their packaging in many parts of the world (including showing full-scale pictures that cover everything else). I am not a big fan of governments, but to me even that isn't thought police — it is a government's job to weigh different rights against each other. In this case the tobacco manufacturers' right to choose their own packaging design has been given lower priority than the health issues tobacco causes within society. We expect governments to make these weighing decisions all the time.
Now maybe it is because I am a European, but:
- A company having the right to free speech is a perversion of the purpose of the rights these laws were originally intended to protect. But for the sake of discussion let's assume they shall have that right
- A government could still weigh the rights of X against those whose rights are harmed by the speech of X. Just because you have free speech doesn't mean it trumps every other right in all circumstances, especially if we are talking about the company of one of the richest persons on earth (and quite frankly: a society where this was the case would be very dystopian). A government has to weigh the rights of a corporation against the impact that corporation has on society. That is part of why we have governments.
So maybe the point where we disagree is that I literally believe that e.g. a person who has been born with dark skin has the right not to be discriminated against — and that it is the job of a government to ensure that these rights are not violated by others. Not at all costs of course, and naturally that right has to be weighed against other rights, but with great power comes great responsibility. I'd rather live in a nation where the rights of the individual are taken more seriously than those of corporations, but hey, billionaires in a pickle and such.
> So maybe the point where we disagree is that I literally believe that e.g. a person who has been born with a dark skin has the right not to be discriminated against — and that it is the job of a government to ensure that these rights are not violated by others.
You don't have a right to not be offended. If someone wants to write an op-ed about why a certain group of people are awful and someone wants to publish that then so be it. The government's job isn't to make sure someone has hurt feelings.
Everyone else will condemn them and they'll be marginalized for it and life will go on.
Now, if someone wanted to open a public business and not allow a certain group of people allowed in to shop then the government can step in. That's not speech but rather discrimination.
In terms of saying racist things, it is. It’s just words. Despicable words but just words none the less. I’m not saying being offended is wrong but rather you have no right to not be offended.
If someone wants to stand on the street corner and spew racist comment after racist comment it’s their right to do it. They might get beat up or protested or exposed for the community to judge and shun socially, but it’s their right however disputable it is.
It is not. "We make no attempt to moderate content that is not illegal" is a 100% compliant report. The law is about providing information to consumers about the systems they participate in. It's literally a transparency law.
> it complies with whatever arbitrary thing the state
Except if they'd like to keep their "anointed" status, from both a legal (platform vs publisher) and extrajudicial (third party doctrine, parallel construction) standpoint.
Security vs Convenience, as always; the security of the privacy of personal data and papers, versus the convenience of the government's interests of national importance, such as, in their minds, maintaining social order.
If it's our government's job to keep the peace, it's our job to make sure the ends justify the means.
This myth is so stupid. There is no such distinction in law. There is only a distinction for "common carrier", which has a MUCH higher legal bar of basically being physically required for survival — and even then it is often not applied to systems which arguably should be common carriers (the internet).
From my perspective, social media censorship is a huge threat to our rights and democracy and these executives decided they were going to use their massive influence to set the bounds of acceptable discourse at the population level. That behavior is the root problem.
I'd prefer the government is hands off to the extent possible, but something has to be done one way or the other. Ideally the platforms would be painfully fined or even people sent to jail for what they did with the Hunter Biden laptop story which AFAIK illegally changed the result of our presidential election.
How do you think a company hires and trains moderators, or develops classifier systems to moderate, without already having documented all of this already?
X: We are a common carrier
Government: You must censor
X: OK, we'll remove content deemed highly offensive
Government: You're not a common carrier now.
X's moderation job is immensely larger than Musk's ego. And yes, I know it's tempting to make the obvious snarky variations on "nuh-uh, his ego is just that large, hurr hurr hurr", but no, I'm serious, it's plural orders of magnitude larger. Social media networks have entire content moderation regimes in countries Musk does not care about or speak the language of. It's huge. It's arguably the defining moat for these companies. This employs an army of people, both directly doing the content management and writing the intent-analysis bots, and I'd guesstimate — let's see, try to get the orders of magnitude approximately correct here so this is not just a random number of zeros — 0.0000000001% of moderation decisions made somewhere in X are a direct reflection of Musk's ego.
Could you please stop using HN for ideological battle? It's not what this site is for, and destroys what it is for—regardless of what you're for or against.
Your account has been doing this repeatedly and that's not what HN is for.
I have seen hundreds of comments way worse than this post in terms of a political soap box. I'm confused why you have singled out this post, which actually has news articles relevant to the topic.
What makes this comment better than the one we're discussing?
I should add, though, that a moderation reply like "could you please stop" is referencing not just one comment but a pattern of comments by that account. That's an important distinction. If we look at the account's history and see a bad comment as a one-off case, we might let it go (unless it's egregious); but if we look at the history and see that the account has been doing it repeatedly, we're going to ask them to stop because these are the things that really damage HN.
hate speech is understood as all types of expression that incite, promote, spread or justify violence, hatred or discrimination against a person or group of persons, or that denigrates them, by reason of their real or attributed personal characteristics or status such as “race”,[2] colour, language, religion, nationality, national or ethnic origin, age, disability, sex, gender identity and sexual orientation.
It is speech that spreads hate against an intrinsic attribute. It is a codification into law of the concept that there is no justification for society to hate someone for a way they are born, plus some.
It bans both claiming the jews are eating babies and should be killed, and also that religious people are ruining the world and we would be better off if they were dead.
That definition is nonsense for a number of reasons.
The first is that all human behaviour is discriminatory by design: every time you make a choice of any kind, you're discriminating against all the options you didn't select. And the same applies to speech: the expression of any preference represents "hate" against the alternatives.
The second is that it's so generic that virtually any pronouncement can be classified as hate speech. So it's a good legal foundation for censorship.
Best to stick to prosecuting actual crimes, rather than thought- and speech- crimes. Let people say what they will. Sticks and stones...
Speech that isn’t left wing opinions or neutral. Just like movements that aren’t left wing are “threats to democracy,” politicians that aren’t left wing are fascists etc. Ideas and policies aren’t allowed to be right wing, only allowed if they move us further to the left as a nation.
That's a pretty weak response. "Yeah he's playing favorites and oppressing one political side, but you're on that side so you're biased when you call it out"
You’re paraphrasing incorrectly. The portrayal of the X/twitter account as distributing rather than opposing CSAM was intentionally misleading and weaponized. There’s no evidence of bias at twitter from the response.
i think he's just an engineer who wants a platform where people don't get banned for having right-wing opinions. engineers tend to be like this. he's got bigger things where he focuses his attention.
the "musk is an evil extreme nazi" conspiracy theorists are out in full force in this thread
Could you please stop using HN for ideological battle? It's not what this site is for, and destroys what it is for—regardless of what you're for or against.
Your account has been doing this repeatedly and that's not what HN is for.
Wasn't the entire point of the Matt Taibbi "Twitter files" attempt at a news story to expose exactly this, except slanted to make it look like a big liberal conspiracy rather than the good faith operation of a social media platform?
Musk made big loud statements about free speech, etc. and then ended up arriving back at the status quo. Turns out there never was a problem to begin with, and you actually need the TOS (you know, the reason Babylon Bee, Jordan Peterson, and Trump were correctly banned) because a toxic platform drives away users and advertisers.
the lawsuit, as perceived from X's viewpoint, is forcing the company to have regulations on "hate speech", which from X's viewpoint then becomes a government control mechanism for censorship
> slanted to make it look like a big liberal conspiracy rather than the good faith operation of a social media platform
good faith is when one can steelman the other point of view.
those files revealed the government was stepping over lines to use censorious mechanisms through the private sector, the conspiracy seems to be on the other side of the aisle. and the lawsuit is over the same principles.
>the lawsuit, as perceived from x's viewpoint, is forcing the company to have regulations on "hatespeech"
But it doesn't. This law only requires that a company's moderation policies be public. The company already has these policies internally, regardless of whether they follow them, and there is no room nor justification in the law for punishing someone for "not following" said policies, nor is such a concept even defined.
This law has nothing to do with "punishing" Twitter, as it literally cannot, and more to do with making Google, Facebook, Twitter, and Bytedance be transparent about what content they disallow on their platform.
You know the once a month post here about "Google killed my email account of 10 years for no reason and I can't contact anyone to ask wtf or get it fixed"? This law is the first step in combating THAT
You're no more confused than Musk himself. The descent into internet crankery is really distressing. Building companies to manufacture cars and rockets involves objective problems with objective solutions, and he does well there. Engaging in a crusade to expose the "Woke Mind Virus" or whatever just leads to irrational nonsense like this.
The US Federal government, Justice Department, and FBI were involved in censorship campaigns targeting specific individuals. You’re omitting this part.
They still are. Why do you think that has changed? Musk has complied with more demands from government since taking over than the previous leadership did.
The US Federal government, Justice Department, and FBI are currently involved in every substantial American business. Did Matt think he was scooping Snowden 9 years after the fact?
To be precise, three Republican-appointed Federalist Society judges ruled the Biden administration "likely violated" the First Amendment. The judiciary has been politicized to the point that we can't pretend these people are nonpartisan anymore. They're politicians in robes with a political agenda.
And if they had voted the other way would you be cheering that they’d voted the right way? By what metric are you judging the ruling other than whether it agrees with your political ideas going in?
Maybe they are politicians in robes. But what does a non-politicized judiciary look like to you, beyond voting your way?
How do you know I'm not cheering about the way they decided? I'm very happy to keep the government from violating the first amendment. But let's not pretend there aren't billionaires paying for exactly these decisions to come down. If we are going to trust the process, the process can't be corrupted like this.
In this case every member of the panel is a contributor to the Federalist Society, which is knee deep in dark money with the explicit political goal of reshaping the judiciary in its image. I'm fine with right-leaning decisions, but these judges just should not be involved in something so explicitly political. It taints everything they do.
There is a reason Twitter now reports 'user minutes' or whatever and no longer reports DAU: because their active users are plummeting. If 1000 bots spend all day on Twitter, that's a lot of 'user minutes' but doesn't do anything for selling ads or the health of the platform. The ship is already sunk; stop trying to bail the water out.
For people saying, “oh this is just a reporting requirement and fear mongering over this law is a slippery-slope fallacy”: it would be, except the people behind the law have been repeatedly documented as wanting to jump headfirst down this slippery slope:
> if social media companies are forced to disclose what they do in this regard [i.e., how they moderate online content], it may pressure them to become better corporate citizens by doing more to eliminate hate speech and disinformation.
> [T]he Legislature also considered that, by requiring greater transparency about platforms’ content-moderation rules and decisions, AB 587 may result in public pressure on social media companies to ‘become better corporate citizens by doing more to eliminate hate speech and disinformation’ on their platforms. . . . This, too, is a substantial state interest.
> important first step in protecting our democracy from the dangerously divisive content that has become all too common on social media.
Note that “hate speech” and “disinformation” are not legal concepts and both include various protected speech. This is your government arguing in the open how this law in fact is a great first step towards restricting some of this protected speech.
I'm continually amazed how much people fall for language manipulation. "Hate speech" is almost always speech the powerful want silenced. And what kind of speech do the powerful censor?
* The truth whenever it is inconvenient to them
* speech that threatens their power
* speech that hampers their agendas
Just about everyone who decries "hate speech" is actually perfectly fine with it, as long as it is directed towards the people they themselves hate. Here is one example: https://twitter.com/MrAndyNgo/status/1523476586330136576
The offender, Caroline Reilly, called for literal genocide on Twitter, and was never censored! You can't get more hateful than that. I wish I knew how to wake people up to the fact that "hate speech" laws are nothing but a power grab for the already-powerful.
> "In its complaint, X Corp. argued that AB 587 violates the First Amendment..."
How in the hell can a corporation claim that it has free speech?
While we're at it, maybe we should also find corporations liable for damages done to the communities they serve, with the chief executives being the proxy humans for those damages, including murder (ahem, chemical and oil companies)? That would be highly satisfactory.
If a corporation doesn't have speech, nonprofits like museums and schools and abortion clinics and lobbyists wouldn't be able to say stuff without censorship. (The press and churches are separately protected, but the 1A applies more broadly to organizations other than them via corporate personhood).
That said, I think Citizens United (money is also speech, go ahead and corrupt the political process even more with dirty invisible money) is pretty fucked up, and yes, corporate liabilities should also include executive criminal prosecutions for crimes against society (and humanity!).
Freedom of speech should only be given to those people and/or collections of people the parent already agrees with, of course.
And only the good guys should have guns (as determined by an Expert’s “mental health” assessment).
And of course the state is free to seize funds from criminals.
Oh and while “our guys” are in federal power we’d better create a bunch of new rules and regulations, that way we can coerce all those states filled with idiots into our hightened moral ground.
Corporations are not just collections of people. If that's all they were you wouldn't have to incorporate and there would be no concept of "piercing the corporate veil".
Because they are not "just collections of people", corporations are a legal fiction meant to be a handout to a group of people for investing in the economy.
It's insane. If you just get five people in a group and start selling lemonade that hurts someone, you do not get limited liability protection! You only get that if you file some legal paperwork. That legal entity, separate from any of the individuals involved, is what a "corporation" is.
> How in the hell can a corporation claim that it has free speech?
"Corporations are people" when it comes to rights and privileges, but when it comes to damages and criminality, they are vague, amorphous entities that cannot possibly be held liable for anything. Very convenient for them.
> How in the hell can a corporation claim that it has free speech?
Barring superseding reasons to the contrary, US legal doctrine generally ascribes to corporations the same rights it ascribes to individuals. This is a relatively new construct (roughly the 1970s), but it is the law of the land as interpreted by the courts. https://constitution.findlaw.com/amendment1/freedom-of-speec...
> How in the hell can a corporation claim that it has free speech?
14th amendment.
> No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.
>> How in the hell can a corporation claim that it has free speech?
Because they do. It is written in the constitution.
"Congress shall make no law ... abridging the freedom of speech"
It doesn't say only for natural persons. Nobody has ever interpreted it that way. The constitution limits government power, and this particular limitation is right up at the top for good reason.
As a non-US person, I sometimes get the feeling that for anything a citizen or company doesn't like, they can just cite the First Amendment. It feels like half of a "get out of jail free" card in Monopoly. I don't know all the content of the First Amendment, but there must be a lot of text in it.
It's short but dense: Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
Because of all those 'or's it outlines quite a wide swath of protected activities. Because of the 14th amendment, it directly applies to not only the federal government, but also state governments.
> Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
That's it. But it's caused centuries of back and forth arguments and mountains of case law and supreme court opinions.
The US has a sort of Stockholm syndrome relationship with our Constitution... it's really hard to interpret or change and basically it's read however a given generation of politicized judges wants it to be. A decade or two later that will change, somewhat, and then be reversed again. Public will has little impact on it, and the supremes have no accountability. It's a mess.
We worship it as sacred, but it creates a lot of problems in modern society that the framers didn't foresee. It's an entirely undemocratic piece of paper holding the country and its future hostage, IMO.