X Corp vs. Media Matters [pdf] (courtlistener.com)
62 points by minimaxir on Nov 21, 2023 | hide | past | favorite | 97 comments


Not only that, but apparently several state attorneys general are interested in 'investigating' that outlet for saying things Elon didn't like.

Using the power of the state to 'investigate' media you don't like is some real free speech absolutism.


Texas attorney general is investigating a Washington DC based non profit for posting screenshots of a California based social network. That makes sense.


It really is incredible the extent to which the American right has abandoned any pretence of caring about the rule of law. Like, this is mafia state stuff. It wouldn't fly in many developing-world democracies, never mind developed ones.


This has to be the easiest possible way ever to mark & note activist anti-woke judicial systems.

The judicial system galvanizing itself into action to defend a once-$44B company for... being exposed as posting highly objectionable content next to those who asked & paid to not have that happen for their promotions? The system is defending what here?

Admittedly, Media Matters seemingly went to somewhat extreme ends to make this happen. But the site has been a haven for the worst shit lately, just horrible people who have taken over & loudly, proudly usurped the supposedly left-biased site; this seems like a reasonable light that MM have shined, even if it took effort to get the most repugnant, vile crap adjacent to sponsors' content. MM tried hard to make advert & scum directly adjacent, but the character they revealed seems like what Twitter is & is increasingly overrun by. MM showed the face of modern Twitter accurately.

Being mad over this ill light Twitter allowed on itself is silly but fine. It being legally actionable seems farcical & sad: a nasty bit of spite I don't think journalistic agencies should have to be afraid of. Having multiple right-wing states' attorneys general spring into action to defend the poor unfortunate $44B company that did bad things & got found out seems like a horrible abuse of government power to beat up on the media for showing how it is.


(That's called fascism...)


So Twitter is basically admitting this happens in a court document:

> Media Matters therefore resorted to endlessly scrolling and refreshing its unrepresentative, hand-selected feed, generating between 13 and 15 times more advertisements per hour than viewed by the average X user repeating this inauthentic activity until it finally received pages containing the result it wanted: controversial content next to X’s largest advertisers’ paid posts.

Good job, Elon.


Hey everyone, Twitter is literally spying on your every move. If you file a lawsuit vs Twitter, they will scoop up that data and literally use it against you.

The self-own is so real with this one.


Media Matters didn't sue Twitter.

The lesson is, if you do something Twitter doesn't like, they'll scoop up your data and use it to sue you.


If you attack a company in a way that leads to monetary damages, then yes, that company is entitled to use its data for redress. When you open Hacker News, or any website, of course they have all your information in log files about what you opened, when, and from where (IP address, browser headers, etc.). That should not be surprising.


Oh, that really is true, isn't it.

I hadn't realized the second self-own after the enormity of the first.


Can you name a major company that wouldn't do this?


Whistleblower laws in general actually.

Ex: If you file a complaint about your boss in good faith, your boss is normally not allowed to retaliate against you.


I didn't realize Media Matters was a Twitter employee.


So 1 out of 15 users who browsed Twitter for an hour would see the same result?


I would say generally I think the idea of placing ads "next to" or "on" hateful content doesn't really get me upset. If it's a list of people you follow with ads and you follow crazy people, I don't think it's surprising that would happen. It's also unclear why advertisers should care, given that it's personalized. Maybe you could argue that if an account is saying hateful stuff, they shouldn't put any ads in its list of tweets if you go directly there?

That said, the Media Matters article starts with a quote from X CEO Yaccarino (https://www.mediamatters.org/twitter/x-placing-ads-amazon-nb...) specifically saying they put controls in to prevent it. So it seems fair enough for a journalist to check if those controls work. Clearly, they don't in this case. Or the controls are for another case.

I do think Media Matters could have been clearer about its methodology, since I agree with X that making an account that only follows hate accounts and seeing if it shows ads is not really what the article implies. I don't see how serving ads near hateful content is really more objectionable than just having the hateful content in the first place. The connection to ads seems more like a tactic to try to force them into action, which is more activism than journalism.

However, it's crazy to try to sue them for this. It's not illegal and it is mostly accurate! It's especially unacceptable that these virtue-signaling Republican AGs are trying to capitalize on it. Gross!


[flagged]


Then don't advertise on a website with user generated content


The entire mechanism by which Twitter/X is able to convince advertisers to advertise on their site is by promising their ads will not appear next to vile content.

Because otherwise, you're right, no major brand would risk it.


Most sites at least try to get rid of that stuff, instead of encouraging and promoting it.


> Clearly, they don't in this case. Or the controls are for another case.

I mean the lawsuit (and preceding tweets by Yaccarino) allege that they do indeed work and Media Matters effectively committed a denial of service attack of sorts in order to forcibly cause them to appear.


It's not a denial of service attack for them to scroll more than a typical user. They don't allege it is such an attack either. If X is correct, they certainly went to some effort to make the screenshots, and it's somewhat of a synthetic test because they only follow brand accounts and hate accounts – but that would be the first way you'd test such a hypothesis, and X did in fact serve them the ads next to that content.


So the complaint is that, in order to see if Twitter would place ads next to hate content, Media Matters first followed accounts that post hate and then scrolled down?


No, the complaint is that Media Matters lied and slandered X by making up a story. MM isn't suing anyone; X is.

Given that MM documented what they did, and plenty of others have confirmed that behavior, it seems hard to see how MM lied or slandered X.


I’m surprised that Musk is going through with this. It seems like this lawsuit would mean that Media Matters would be able to get access to lots of internal information via discovery.


The point is to discourage others from doing the same. It's a classic SLAPP lawsuit.

Some federal circuits (e.g. the 9th) have protections against SLAPP lawsuits. The 5th circuit (where this lawsuit was filed) does not, and the district judge is Reed O'Connor (https://en.wikipedia.org/wiki/Reed_O%27Connor) who's infamous for rulings friendly to conservative causes.


Isn’t this a jury trial?


If it gets to that point, yes. There are multiple places where the lawsuit can end before a jury enters the picture (dismissal and summary judgement being more prominent examples), and in those cases it's the judge making the ruling.

The insinuation here is that the venue was chosen to get a judge that is less likely to rule against X/Twitter in those earlier stages and make it more likely that later stages of the trial will be reached.


This is the lawsuit that Elon threatened (https://twitter.com/elonmusk/status/1725771191644758037), but I am surprised he actually went through with it.


As am I. The lawsuit doesn't doubt the legitimacy of Media Matters's experiment either; instead, it just demeans it.

Normally, the lawsuit template against slander is "The other guy was lying, they were malicious (or reckless) about it, and here's how it damaged us".

In this case, they have multiple paragraphs describing Media Matters's procedure, proving that the screenshots are real. That's a very weak case to stand on, IMO.

------------

I think Media Matters's defense is simple: the screenshots were real, and we have reason to believe they're representative of Twitter's engagement process.

How does this pass the bar of recklessness for Media Matters?


> and we have a reason to believe they're representative of Twitter's engagement process.

I'm not even sure they need to argue that, given the article itself doesn't seem to make any direct claims with respect to frequency of occurrence.

Granted, I'm not exactly sure how implication is analyzed in defamation cases, but I don't think whatever implication may be present here is particularly strong, if that.


Media Matters's defense will most likely be to file for dismissal due to improper venue.


Could you elaborate on why? I'd appreciate more perspective on the matter.


I am not a lawyer, but the argument that Texas has personal jurisdiction over Media Matters/Eric Hananoki (Point 19) is...a stretch.

The real reason Elon is suing in Texas is politically motivated: https://twitter.com/elonmusk/status/1726767436618191177

EDIT: Ken White says that a change of venue would be unlikely to succeed: https://www.threads.net/@matthewrigdon/post/Cz42BAtp_X7


Media Matters exists in Washington DC. Twitter exists in Nevada. This court filing is in Texas for some reason, which is really strange.


The ND of Texas is a famously right-wing Federal district - so litigants with marginal cases but causes sympathetic to conservatives "forum shop" to try to get in front of specific judges there.

https://newrepublic.com/article/165730/northern-district-tex...


I'm not. It seems like an absolutely terrible idea with no hope of a good outcome, which is kind of his thing these days.


With his personal trajectory over the past several years I would have been surprised if he didn't.


There's a nonzero chance that just filing the lawsuit will make things worse (discovery, anti-SLAPP countersuit, advertisers seeing the chaos and leaving faster) than doing nothing at all.


Apparently the lawsuit was filed somewhere where the anti-SLAPP stuff doesn't work?

https://www.threads.net/@kenpopehat/post/Cz41iIQL7I5


That is apparently the case. Although one wonders if you can counter a bad venue with another bad venue.


Certainly invites a Streisand Effect.


Because he's all about freedom of speech. /s


[flagged]


Point 5 of the complaint is funny:

> This November alone Media Matters released over twenty articles (and counting) disparaging both X Corp. and Elon Musk—a blatant smear campaign.


Smear and slander are not free speech


They actually are free speech, especially if someone considers themselves a "free speech absolutist".

I keep hearing from these guys that the answer to speech you don't like is more speech. Musk owns one of the most prominent social media sites in the world - he could have front-paged an article explaining why Media Matters was wrong, the steps they took to 'contrive' their result, etc. etc.

But no, after encouragement from some of the worst people in the country, he forum shopped a vexatious lawsuit to get the Federal government to punish a company whose speech Elon didn't like. Extremely revealing.

Edit;

Just to remind everyone - this most recent spat started when one user posted a super antisemitic screed about how the jews have been pushing white-hatred and that they essentially deserve what they get from the "hordes of minorities" -- to which Elon replied, "You have said the actual truth".


[flagged]


Ah yes, famously measured scholar "LibsofTikTok" weighing in on whether someone claiming Jews are importing "hordes of minorities" due to their "hatred of whites" is antisemitic or not. Meanwhile, antisemitic mass murderers used the same rationale to justify their actions, and tons of white supremacists celebrated Elon's comments. Very normal.


Quote, directly please, the sentences that you are criticizing.

Also, I have no idea what being a scholar has got to do with it. She's an Orthodox Jew; she's not going to encourage anti-semitism.


Anyone defending the great replacement bullshit at this point in history should do more reading and less posting, so good luck with that.

https://www.theatlantic.com/ideas/archive/2023/11/elon-musks...


You've still not responded in anything like a rational good-faith manner so I'm done.


You need actual malicious intent - what protects Media Matters is the same thing that protects most crazed AM broadcasters, right? https://en.wikipedia.org/wiki/New_York_Times_Co._v._Sullivan


IIRC, "Reckless" is enough for some cases of slander.

I don't see how this passes the bar of "Reckless" however.


Also, of course, if what was said is _true_, then intent becomes irrelevant.


In the US, and in most countries, it’s not slander if it’s true. For a public figure, the standard is quite high; you’re talking a malicious objective lie. And even _then_ it can be hard to prove, as Naughty Old Mr Car knows very well; remember the Thailand thing?

So, “figure X is a stupid arsehole”: not slander. “Figure X has been sneaking into my kitchen and stealing the milk”: might be slander, depending.


No one is a free speech absolutist if you take absolutist in its literal sense.


The lawsuit argues that X technically did not "place" advertisements next to anti-Semitic and racist material, because the ads shown are driven by user data, and Media Matters chose to follow racist accounts alongside well-known brands.


That seems like a rather... interesting... argument to make. It feels like it weakens the efforts to convince advertisers that their ads are safe from showing up next to objectionable content - it's not our fault your ads show up next to content you don't want them to show up next to, it's all the user's fault!


Presumably the advertisers are concerned about being shown while someone is having a negative reaction to bad content, or about being associated with the bad content.

If an individual user has specifically sought out the bad content though, they presumably don't themselves think it is bad, and so they don't have a negative reaction to it and also they don't have a negative view of an advertiser being associated with it. Is it still a problem for the advertiser? It's at least not a public perception problem anymore, more of an edge case of "do I want to encourage antisemites to shop at Target" kind of thing.


> Is it still a problem for the advertiser?

Yes, because it's being shown in association with the bad content. That was in your first sentence.

>It's at least not a public perception problem anymore, more of an edge case of "do I want to encourage antisemites to shop at Target" kind of thing.

That's a different thing. Twitter isn't selling antisemite-targeted ads to Target... that's the whole point. Twitter likely agreed with Target to not show its ads in relation to negative content like that. If that negative content is all that you see when you go on twitter, and twitter is still serving you ads, then they are serving their advertisers content in association with this negative content.


An interesting perspective. Not entirely sure that's the same line of reasoning advertisers use - they may be more risk-averse and just go for a blanket "we don't want anything to do with <objectionable thing>" rather than "we don't want to show up next to <objectionable thing> unless the user likes it".


Yeah I don't think advertisers want their ads shown if the user likes the bad content. It is just way less severe than if it was happening for all users. Also harder for Twitter to notice and fix if organic occurrence is extremely rare.


"If we didn't show ads with any toxic content, some of our users would never see ads at all! You can't expect us to operate a business like that!"


If that's accurate, then I don't see how they will win in court, with that wrong claim.

TECHNICALLY, "placement" is a term of art in adtech which (in my experience working near my adtech coworkers and their products) can only be "successful" versus competitor adtech offerings if ads are placed with consideration of the user's data. https://www.google.com/search?q=adtech+term+placement

E.g. give the lady a dog advert if she previously expressed interest in dogs.

It seems like X could be proving the defense's case with their own filing.


Is that a "The Algorithm is bad" excuse? Because if that's part of Elon Musk's position, this whole thing becomes even more laughable (considering his complaints about the algorithm were supposedly part of the reason he bought Twitter in the first place.)


For context, the article with screenshots of nazi posts alongside ads for major advertisers: https://www.mediamatters.org/twitter/musk-endorses-antisemit...


I was curious and clicked on the link, but I don’t understand how the “peace with Hitler” signs in the bottom left would be considered Nazi signs? Those seem more like…anti-Nazi signs warning that there were people who thought peace with Hitler was realistic?

Unless I’m missing something.


It could be read either way, I suppose.

But the user which posted it, Karl Radl, is very much on the "Hitler was right" train of thought.


Yea, I have no context because I've never heard of this person.


As best I can tell, advertisers started pausing ads after Musk commented on an anti-semitic tweet, agreeing with it.

The MediaMatters article seemed to follow after.


The Media Matters article was the day before that, but, yeah, it’s fairly clear that advertisers’ decisions were driven more by Naughty Old Mr Car’s misdemeanours. Which makes the whole thing all the sillier.


X seems to be arguing that the report was defamatory because only Media Matters saw ads from 4 companies next to specific pro-Hitler posts, while not presenting an argument that no company has its ads run next to antisemitic content in general.


That is a "black swan" argument from Musk, et al.

Media Matters set up a test account that showed that it was possible for X's algorithm to pair ads with objectionable content. Given that, how can X claim that "no company has its ads run next to antisemitic content in general" when Media Matters has shown that it is possible? Unless Media Matters "photoshopped" or otherwise manufactured the results, but that isn't what the filing claims. They claim that MM set up a few small accounts following only a handful of other accounts (fringe content and brand advertisers) and then scrolled through the feed until they found something bad.

The line in the filing that mentions that MM's tests used existing accounts to get by new member restrictions shows the fragile nature of X's arguments. Most real-life users are going to have existing accounts, so it's that experience you want to check, not the highly-constrained environment X puts new subscribers into because they don't trust them yet.

Customers like Apple, Comcast, NBCUniversal, and IBM are sophisticated ad buyers that wouldn't let a single story change their buying strategies without additional information/confirmation from X. If they made the choice to leave X, I'd bet that the Media Matters story was the last straw, not the first one. And it's quite possible that the Media Matters story was the result, rather than the cause, of those companies' decision to leave the platform in the first place.

While X is trying to spin this as Media Matters "did bad things" to convince Apple, IBM, Comcast, and NBCUniversal to stop advertising with X, it is far more likely that the highly volatile and bombastic behavior of X over the last year had far more to do with that result than Media Matters' article did.


Good points, but do you think that if MM had said "it took us 10,000 clicks to see one ad, we had to follow 30 generators of objectionable content, and we had to follow the same brands shown in the ads," that would make MM's claims significantly different?


It seems to me that defamation here hinges on exactly what Media Matters said their little experiment shows about X: if they indicated that they were capturing the horrific state of the general user experience with respect to ads and offensive posts, then this was a really malicious lie.

Otherwise, they were just using X in a strange manner, which is not defamatory in itself.


This appears to be the article that is being sued over: https://www.mediamatters.org/twitter/musk-endorses-antisemit...

The wording in dispute appears to be:

> But that [the claim that "brands are now 'protected from the risk of being next to' potentially toxic content."] certainly isn’t the case for at least five major brands: We recently found ads for Apple, Bravo, Oracle, Xfinity, and IBM next to posts that tout Hitler and his Nazi Party on X. Here they are: <screenshots>

Nothing is said about how common/rare this occurrence is nor whether anything specific needs to be done to observe such a result.


And if you are one of those advertisers, how "common" a problem does this have to be to make you think you don't want to advertise there anymore? Even X's filing doesn't claim this "can't happen", just that it doesn't happen frequently.

For CMOs making major ad buys for carefully curated marquee brands like Apple, IBM, et al, I suspect that the only concrete number they want to be assured of by their advertising platform is "0".


0 is impossible and advertisers realize that, which is why large user-generated content companies set up a Trust and Safety team to rapidly respond to these issues to placate advertisers.

Too bad Elon fired Twitter/X's Trust and Safety team!


What could a "trust and safety" team even do when the site owner/CEO spends his time directly responding to racist diatribes with positive encouragement [0] ? Like you can't pass off the nastiness as just "some users" or otherwise exceptional when it's being directly nurtured by the forum admin. So the resulting question is more like when to stop advertising somewhere the admin seems intent on making into Stormfront Lite?

[0] https://nitter.net/elonmusk/status/1724908287471272299 . I'm including this link even though it's been referenced to death, because reading primary sources is important - especially with people becoming desensitized to claims of racism from a media landscape that often takes things out of context and heavily paraphrases to blow them out of proportion, which is decidedly not what happened here.


Not in the article itself (written by the defendants) - the lawsuit, following the example of others, explains how they poked and prodded in very unnatural ways, trying to contrive a circumstance in which ads would be shown next to certain posts - and they did find such! X contends no actual users were or ever would be in this same circumstance, so no brand damage was actually done.

It may hinge on exactly how strong the 'protection' that Yaccarino alluded to is inferred to be - whether it's reasonable to infer she meant that content moderation under Musk was now perfected and 100% hateproof, at least with respect to ads.


Sure, but the issue is that defamation requires a false statement of fact. Elon may not be happy that the article is missing context, but that's not the same thing as claiming something false.

> X contends no actual users were or ever would be in this same circumstance, so no brand damage was actually done.

If that's what they want to contend then this lawsuit is probably not the right vehicle. They'd probably be better off making that argument to their advertisers.


The contention is close, but not exactly that - it would be that when MM said 'Yaccarino was wrong, here's proof', this was defamatory because 'protected' was never meant to imply 100% perfect protection, and therefore claims that her statement was disproven - with their contrived method - are false and malicious.

It may well be a weak case. MM are certainly slimy political operators, but they seem to have mostly avoided any direct statements which are easily, unambiguously provably false.


> it would be that when MM said 'Yaccarino was wrong, here's proof', this was defamatory because 'protected' was never meant to imply 100% perfect protection, and therefore claims that her statement was disproven - with their contrived method - are false and malicious.

I'm honestly not sure how that would be legally analyzed. It doesn't really feel like a very convincing argument, but I don't think I can articulate exactly why.

In any case, it doesn't seem that particular line of argument is present in the complaint, so it's pretty much just a curiosity.


> "brands are now 'protected from the risk of being next to' potentially toxic content."

This is such a Musk thing. Even Musk from a long time ago.

Making exaggerations and big implications about his products is part of his DNA. It's a huge factor in his success.


It's a lot less relevant if you find some way to "trick" it into displaying ads in situations users wouldn't normally see, and it could easily be defamatory if you went on to claim that this was therefore routine or a serious problem.

For example imagine that it takes exploits, URL editing, or something similar to do it. The question here really is how much effort you really need to put in to get it to happen.


> it could easily be defamatory if you went on to claim that this was therefore routine or a serious problem.

Claiming that the problem is "routine" might be problematic, but I think the problem being "serious" may arguably be non-defamatory. A problem being "routine" implies there's a pattern, which can potentially be proven/disproven, but whether a problem is "serious" seems much more opinion-based. One advertiser may not care that their ads have a minuscule chance of showing up next to objectionable content, and another one may care very much that there's a non-zero chance.


I don't see how Musk/X will be successful with this unless in discovery it's revealed that Media Matters produced some of the hateful content through sock puppet accounts.


It's nice to see Musk donating so much attention to Media Matters, which will fundraise off this and probably earn quite a few new supporters.


>Inside Linda Yaccarino’s X all-hands after Elon Musk sued Media Matters: ‘By all means, put your heads together to bring new revenue into the company’

https://fortune.com/2023/11/20/inside-twitter-x-all-hands-af...

Archived: https://archive.is/tkT3c


It's hard to take the anti-elon people on hackernews seriously anymore.

They said twitter was going to fail over nearly everything, and I believed them until I lost money on the betting markets.

It's clear Elon is not dumb or incompetent, I wonder what his real move is here?


Starting to wonder if the non-profit corporate form is actually worth allowing at all. There's just so much manipulative or flat out crazy behavior coming from that sector in recent years, of the stuff that'd be slapped down by the courts or governments very fast if it were done by for-profit entities where people's guard is up. OpenAI, Wikimedia manipulating people into donating, the famously expensive WWF fundraising balls and now this.

Media Matters will hopefully lose their lawsuit, be liable for X's losses and be destroyed. They've clearly been passing off heavily manipulated scenarios to advertisers as if they were representative of the average user's experience, for the sole and explicit purpose of waging left wing ideological war against the only tech company willing to defy them, which is a behavior harmful for society overall especially as it's an attack on a more or less public square. You don't see commercial competitors engaging in such manipulative and aggressive attacks like that much, if at all, probably because they are run by rationally self-interested people who don't want to wreck their organization by starting legal fights over ideology.

Unfortunately even if X does win presumably Media Matters has little money to make amends, and there are hundreds of similar leftist NGOs dotted across the landscape using aggressive manipulation to police what people are allowed to say, many of which are just very thin proxies for the US government (see the Twitter Files for examples of that). This problem isn't specific to X, pretty much any organization that isn't explicitly leftist will find that its advertisers are all being constantly harassed and libelled by NGO activists. The UK's attempt at a conservative TV news channel constantly faces this problem as well, as do any conservative news websites or blogs.

A real fix for this type of behavior will probably require law changes to strip the corporate veil from non-profit board members, such that they become personally liable for legal costs of the non-profit itself. Limited liability is useful to grant to for-profit companies because they often take actual and serious risks with large amounts of capital, so there needs to be some shield against that risk becoming of uncontrollable size. But the NGO sector doesn't risk capital to build anything productive. One of the very few that did was OpenAI and we're seeing how that plays out right now.


Seems like a well-grounded complaint. It’s obviously going to be politicized, given politics precipitated things from the start, but there is nothing wrong with letting these matters play out in a court. Remember that Elon has himself been the subject of defamation suits, and he’s won them all. If MM committed deception, then likewise they will be held to account as the court determines right; there is justification for the lawsuit, as X can prove they were damaged.


[flagged]


Okay, you checked the “public display of tribal allegiance” box. Now that that’s out of the way, isn’t Musk the guy who talks about freedom of speech and how X won’t censor you? Do you really think trolling needs to be shut down by the government rather than answered by more speech?


What do you mean by “public display of tribal allegiance”? Who do you reckon I have allegiance to? I don't care about Elon Musk or Twitter. I just don't like Media Matters, and David Brock has been a shady guy for decades.


I don't really like them, but there's nothing here that they've done wrong in this instance. It's just objectively true that Twitter has turned into Gab-lite in many corners of the platform. I see Holocaust denial in my For You page all the time, from far-right accounts and tankie accounts. I recently clicked into a comment section about a Jewish reporter and every comment was "Christ is King." That is something Nick Fuentes started. He's a neo-Nazi. Elon also randomly replies to these people spewing this crap with quips like "interesting," etc., which pumps it into everyone's feeds.


That. That's exactly what made me dislike Elon Musk. I address this message to him:

Stop whining. Please, Elon, stop whining. People don't respect you as much? It doesn't matter; just stop whining, and stop lying to justify your whining. I don't care that you've made mistakes. You're allowed to, as long as you're accountable.

Stop whining. This is becoming old. Organize meetings, art expositions, do some shit, play role-playing games, idk, but please, people do not really care. At most I'll read stuff like that for the drama, but really, live your life and stop whining.


I'm just here with popcorn; why try telling him anything? It's more fun to watch him destroy himself.

As the Germans say, "Schadenfreude ist die schönste Freude" (Schadenfreude is the most beautiful happiness). Yeah it's not a good trait of mine, I'll admit.

And as Napoleon didn't say, "Never interrupt your enemy while he's busy making a mistake" https://quoteinvestigator.com/2010/07/06/never-interfere/


Unfortunately, when the richest person in the world destroys himself, he can take a lot out with him, and it might take a while.


But the issue is that a part of the population looks up to him, an especially young and easily influenced part. And he helps make them whiners. He's not the only one; I've observed that gurus often do the same thing (or take the complete opposite approach, which isn't healthy either).

I'm just tired of the US news once again featuring whining as a defensive mechanism. It started with Hillary, then Trump, and now this. I'm just tired of US news, I guess.


He’s trying to appeal to the American far-right, now, and they appear to like it when Trump does it. I think this is actually a major cultural difference between the US and Europe that I wasn’t previously aware of; this sort of temper tantrum would, I think, almost universally be seen as a sign of weakness here.

Frankly, I think this is the only reason countries with more plaintiff-friendly defamation laws, like the UK, haven’t been forced to reform them; it is such bad press for public figures to actually use them, particularly in nonsense cases like this, that they mostly don’t.



