
> This is now a Streisand effect - as people see that Google and Microsoft try to hide information from them

This comment section is wild.

The videos are up. Microsoft and Google weren't meeting in secret backrooms to censor this one channel. The most likely explanation is that a competing channel was trying to move their own videos up in the rankings by mass-reporting other videos on the topic.

It's a growing problem on social media platforms: Cutthroat channels or influencers will use alt accounts or even paid services to report their competition. They know that with enough reports in a short period of time they can get the content removed for a while, which creates a window for their own content to get more views.

The clue is the "risk of physical harm". People who abuse the report function know that the report options involving physical harm, violence, or suicide are the quickest way to get content taken down.



A tale as old as time. A long time ago I worked in DDoS prevention and the bulk of our first customers were competing gambling sites and online eyeglass retailers.

Why? Because they were all paying people to DDoS each other. Kinda silly, but good for business.


[flagged]


As I have limited dealings with eyeglass retailers: why?


Selling a $2 piece of metal wire and a $15 polycarbonate lens at obscene markups.

It’s also a monopoly: Luxottica owns practically all the brands and dictates the prices.


> $2 and $15

That estimate is way too high. More like 90 euro cents (~$1) for the whole thing, assembled. That's the retail price:

https://www.action.com/de-de/search/?q=lesebrillen


I never thought I’d live to see the day a link to Action got posted on HN but alas, it has arrived! Show those Dollar General losers across the pond how it’s really done


Action is awesome. Shopping there you quickly realize that almost everyone (except Action) is selling junk from China that they bought for pennies at huge markups.


lol. Available languages include 4 kinds of Dutch/German, 3 kinds of French, 2 kinds of Netherlands, 2 kinds of Swiss, 1 kind of Spanish, and no English. Really defined their market, I suppose.


Seems to cover most EU languages, so that seems to be the market, though Switzerland is not part of the EU. But the missing English option is weird as hell, since English is more and more the lingua franca in Europe.

(Swiss is not its own language, btw; Switzerland's languages are Italian, German, and French.)


Zenni Optical is the antidote. Even my too-blind-for-Zeiss-VisionPro-inserts prescription in the highest refractive index lenses is under $100 for a full pair.


The landscape is a little different now than then.

At the time (and really, even now) people would get their eyeglasses from their local provider. Who cares, insurance probably covers some or all of it. Even getting your contacts or glasses prescription released was like pulling teeth, since they wanted to keep it in house.

So a new market was born: get your prescription, then buy online. And it was like the Wild West, not full of eye care professionals but... mostly less-than-above-board places all fighting for your click.

Think about it... if you finally decided to Google eyeglass frames or such, you were entering a whole new realm. And why fight over SEO when you can just take your competition offline? Most people will click a link, watch it load for five seconds, then click back and try the next one.

I have no idea if the industry is still shady or not, but 20 years ago, it was full of nothing but bad actors.

I don't know if it matters at all to the conversation or not, but none of the actors (gambling or eyeglasses) were based in the US, despite their domain names and their courting of US customers. The DDoS company was based in the US.


> Who cares, insurance probably covers some or all of it

Exactly, this is why vision "insurance" is basically a scam, supported only by US tax laws that enable employers to offer vision "insurance" tax-free, while people buying their own eyeglasses have to pay with after-tax dollars.

Except where insanely inflated, glasses cost at most tens of dollars. Certainly not the kind of thing one needs insurance to cover.


Just a reminder that "insurance covers it" doesn't mean society doesn't pay for it: all of us pay for it. The insurance company paying drives prices up for everyone else, in the same country and abroad. So the whole world ends up paying more for that "insurance coverage": more for the product, more for the insurance, more in taxes that fund free public healthcare...


> Microsoft and Google weren't meeting in secret backrooms to censor this one channel

That's not the argument, IMO. They don't have to be intentionally malicious in each action; prior choices shape the outcomes of later options. A drunk driver doesn't want to kill a little girl in the road, but they decided to get behind the wheel after drinking. Likewise, a large company decides to chase more profit knowing there are repercussions and calculating the risk.


The DMCA prescribes the process. Google (or any other party) isn’t allowed to decide for themselves what is or is not a valid DMCA complaint.

Complain to Congress, they’re the ones who set this up to work this way.


This isn't a copyright issue. DMCA doesn't apply.


DMCA covers circumvention of protection measures.


That's Section 1201. The takedown bit is Section 512. They're two different things.

It's also not clear how an informational YouTube video would be either a circumvention tool or an act of circumvention if nothing in the video itself is infringing.


False DMCA claims are commonly used to take down videos like this.


And were not in this case.


[flagged]


Read the article.


[flagged]


> They don’t even attribute their quotes, and there is no screenshots I can see of these supposed notices either.

You'd see them, if you read the article. Look for the big image with the caption saying "Source:".

I should warn you that you'll have to make it through seven (7) sentences of text before you get there.

As a side note, not a single word of your comment just now is true. Did you think no one would notice?


> You'd see them, if you read the article. Look for the big image with the caption saying "Source:".

Please don't sneer at fellow community members on HN. https://news.ycombinator.com/newsguidelines.html


[flagged]


It's the fourth sentence in the piece.

>Two weeks ago, Rich had posted a video on installing Windows 11 25H2 with a local account. YouTube removed it, saying that it was "encouraging dangerous or illegal activities that risk serious physical harm or death."

Stop embarrassing yourself.


Have you?


> they’re the ones who set this up to work this way.

Who lobbied for it to work that way? I'm assuming Google aren't entirely innocent here.


The DMCA is from 1998. I don’t think Larry and Sergey were taking a break from inventing Google so they could lobby Congress from their Stanford dorm room.


From what I remember, Google fought against DMCA abuse by media companies and lost.


Google had only been founded a month before, I don't think they had vast lobbying powers yet!


> They know that with enough reports in a short period of time they can get the content removed for a while

This can be accomplished with bogus DMCA notices too. Since Google gets such a high volume of notices, the default action is just to shoot first and ask questions later. Alarmingly, there are zero consequences (financial or legal) for sending bogus DMCA notices.


Action against DMCA abusers has happened in a few instances, but it's still largely an unsolved problem without sufficient deterrence from abuse.

https://techhq.com/news/dmca-takedown-notices-case-in-califo...


It is a weapon the music industry wanted, but it now has this unintended consequence.

I think it's high time Google stopped acting as judge, jury, and executioner in the court of copyright enforcement.


The law says they have to.


The law also says a counterclaim can be filed immediately. Google doesn’t follow that part.


Google has nothing to do with filing a counterclaim except accepting it if filed. The owner of the removed content is the only one who is allowed to file it.


Google does not immediately reinstate on counterclaims.


They shouldn’t, because the original claimant has 10-14 days (depending on exact timing) to sue. If they don’t, Google reinstates. And considering that with many other hosts it can take 6 months…

[https://copyrightalliance.org/education/copyright-law-explai...]
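For the curious, here's a minimal sketch of that 512(g) timeline as a decision function. This is my own simplification; the statute counts business days and has further conditions the sketch ignores:

    # Toy model of the DMCA 512(g) counter-notice window, simplified.
    # Real compliance counts business days and has additional conditions.
    def host_action(business_days_since_counter_notice: int,
                    claimant_filed_suit: bool) -> str:
        """What the host does N business days after receiving a counter-notice."""
        if claimant_filed_suit:
            return "keep content down pending the court action"
        if business_days_since_counter_notice < 10:
            return "keep content down (claimant may still sue)"
        if business_days_since_counter_notice <= 14:
            return "reinstate (must happen between day 10 and day 14)"
        return "reinstate (overdue)"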

Not saying Google is good or anything, but this is well trod ground at this point.


So Google is between a rock and a hard place here.

If they don't react quickly and decisively to reports of "possible physical harm", even if the reports seem unfounded, they'll eventually get the NY Times to say that somebody who committed suicide "previously watched a video which has been reported to Youtube multiple times, but no action was taken by Google."


You can act quickly and decisively and also correctly. Multiply the average number of reports per day by twice the average length of a reported segment to get the review hours needed, divide that by the effective work hours per reviewer per day, and hire that many people to process reports. Congrats, your average time to resolution is 24 hours.
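A back-of-envelope version of that staffing math (all inputs here are invented for illustration, not YouTube's real numbers):

    # Rough staffing estimate for human review of reports; numbers are
    # illustrative assumptions only.
    reports_per_day = 100_000                  # average reports received per day
    avg_segment_minutes = 2.0                  # average length of a reported segment
    review_minutes = avg_segment_minutes * 2   # doubled, per the rule of thumb above
    reviewer_hours_per_day = 6.0               # effective work hours per reviewer per day

    review_hours_needed = reports_per_day * review_minutes / 60
    reviewers_needed = review_hours_needed / reviewer_hours_per_day
    print(f"{reviewers_needed:,.0f} reviewers")   # -> 1,111 reviewers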

If that's too expensive, your platform is broken. You need to be able to process user reports. If you can't, rethink what you're doing.


Not saying you’re wrong in this particular instance, but there are all sorts of areas where we accept that harm will occur at scale (e.g. that 40,000 people per year die in motor-vehicle incidents just in the US). How do we determine what is reasonable to expect?


We require auto manufacturers to include certain safety features in their vehicles, to decrease deaths to a socially acceptable level.

The central ill of centralized web platforms is that the US never mandated customer/content SLAs in regulation, even as their size necessitated that as a social good. (I.e. when they became 'too big for alternatives to be alternatives')

It wouldn't be complicated:

   - If you're a platform (host user content) over X revenue...
   - You are required to achieve a minimum SLA for responsiveness
   - You are also required to hit minimum correctness / false positive targets
   - You are also required to implement and facilitate a third-party arbitration mechanism, by which a certified arbitrator (customer's choice) can process a dispute (also with SLAs for responsiveness)
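Purely as a sketch, those outcome-defined targets could be expressed as machine-checkable rules; every name and threshold below is invented for illustration:

    # Hypothetical outcome-defined moderation SLA; names and numbers invented.
    from dataclasses import dataclass

    @dataclass
    class ModerationSLA:
        max_hours_to_first_response: float  # responsiveness target
        max_false_positive_rate: float      # correctness target
        max_days_to_arbitration: float      # third-party dispute target

    def meets_sla(sla: ModerationSLA, response_hours: float,
                  false_positive_rate: float, arbitration_days: float) -> bool:
        """True iff the platform's measured numbers hit every target."""
        return (response_hours <= sla.max_hours_to_first_response
                and false_positive_rate <= sla.max_false_positive_rate
                and arbitration_days <= sla.max_days_to_arbitration)

    # e.g. a platform over the revenue threshold might be held to:
    print(meets_sla(ModerationSLA(24.0, 0.05, 14.0),
                    response_hours=30.0, false_positive_rate=0.02,
                    arbitration_days=10.0))   # -> False (too slow to respond)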
Google, Meta, Apple, Steam, Amazon, etc. could all be better, more effective platforms if they spent more time and money on resolution.

As-is, they invest what current law requires, and we get the current situation.


It’s even worse when you think about what happens when it’s NOT English + NOT mainstream content.

I really wish someone could tell me that either

1) Yes, we can make a system that enables functional and effective customer support (because that's what this case is about), no matter the language, or

2) No, we can't, because it's fundamentally about manpower that can match the context against actual harm.

Whatever I suspect, having any definitive answer here decides how these problems eventually need to be solved, which in turn tells us what we should ask and hope for.


> a system that enables functional and effective customer support

I'm not saying that it's humans, but it's humans.

Augmented by technology, but the only currently viable arbitrator of human-generated edge cases is another human.

If a platform can't afford to hire moderation resources to do the job effectively (read: skilled resources in enough quantity to make effective decisions), then it's not a viable business.


> If a platform can't afford to hire moderation resources to do the job effectively (read: skilled resources in enough quantity to make effective decisions), then it's not a viable business.

But, it is viable. Many profitable businesses exist that don't pay for this.

One may instead mean that they want such businesses to be made non-viable, in which case we should consider critically which business models, whose other consequences we might currently like, would also be made non-viable. For example, will users suddenly need to pay per post? If so, is that worth the trade-off?


Businesses that are profiting off un-paid-for externalities aren't socially sustainable businesses. They're just economic scams that happen to be legal.

Imho, we should do what we can to make sure they're required to pay for those externalities.

Then, they either figure out a way to do that profitably (great! innovation!) or they go under.

But we shouldn't allow them to continue to profit by causing external ills.


> Then, they either figure out a way to do that profitably (great! innovation!) or they go under.

They do figure out how. That's the problem. This stuff is all trade offs.

If you say they have to remove the videos or they're in trouble then they remove the videos even if they shouldn't.

You can come up with some other rule but you can't eliminate the trade off so the choice you're making is how you want to pay. Do you want more illegitimate takedowns or less censorship of whatever you were trying to censor?

If you tried to mandate total perfection then they wouldn't be able to do it and neither would anybody else, and then you don't have video hosting. Which nobody is going to accept.


Given YouTube's profits, I think it's fair to say there's a substantial viable middle ground of much more vigorous (and labor intensive) appeals than what's currently done.

And that requirement can be created by more robust, outcome-defined regulation.


> Given YouTube's profits, I think it's fair to say there's a substantial viable middle ground of much more vigorous (and labor intensive) appeals than what's currently done.

People keep looking at the absolute amount of profit across a massive service and assuming that it means they could afford to do something expensive. But the cost of the expensive thing is proportional to the size of the service, and then they can't, because dividing the profits by the number of hours of video turns into an amount of money that doesn't buy you that.

> And that requirement can be created by more robust, outcome-defined regulation.

What are you proposing exactly?

Outcome-based metrics are the things that often fail the hardest. It's reasonable to have a given level of human review when you have functioning CAPTCHAs on the reporting function to rate-limit spam reports. But if you then require that by law, and LLMs come around that can both solve CAPTCHAs and auto-generate spam reports to target competitors, your cost of doing human review has gone up many-fold while you're still expected to meet the same outcomes.

Then they either have to tune whatever metric you're not forcing them to meet up to draconian hellscape levels to meet the one you're demanding, or you're now demanding they do something that nobody knows how to do at all. Both are unreasonable.

And all of this is because the government doesn't know how to solve the problem either. If you want to prohibit things with a "risk of physical harm" then you have to hire law enforcement to go drink from the fire hose and try to find those things so the perpetrators can be prosecuted. But that's really expensive to do properly so the government wants to fob it off on someone else and then feign indignation when they can't solve it either.


Please explain what kind of magic your solution uses to ensure that reports always come in at a perfectly even pace without any peaks or valleys. Because without that, your proposed approach will not work.


Perhaps the current process becomes the backlog management system. This isn’t an insurmountable problem, were the incentives in place.


They have a history of removing videos that describe things they don't like under the guise of "harm", e.g. the Linus Tech Tips video on De-Googling your life: https://www.youtube.com/watch?v=apdZ7xmytiQ


Google incentivizes takedown-vote abuse:

   1. 3-strikes rules for channels
   2. Automatic takedown systems based on votes
   3. Incentivizing competing channels with ads
   4. No verification, limits, or punishment of bogus takedown voters and vote bots
   5. Lack of democratized, universal takedowns of equivalent content

Does Microsoft unfairly benefit from Google's takedown tirefire? I do not know.

But if I were designing a voting system for takedowns, it would be:

   1. One non-DMCA takedown vote per user per year
   2. No takedown votes from accounts less than 1 year old
   3. Take down all equivalent content when a video is voted down
   4. Verification of DMCA ownership before taking down DMCA-protected content
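A toy sketch of rules 1 and 2 in code; the thresholds come from the proposal above, and nothing here reflects any real platform API:

    # Toy model of proposed takedown-vote rules 1 and 2; not a real API.
    from datetime import datetime, timedelta

    MIN_ACCOUNT_AGE = timedelta(days=365)  # rule 2: account must be >= 1 year old
    VOTE_COOLDOWN = timedelta(days=365)    # rule 1: one non-DMCA vote per user per year

    def may_cast_takedown_vote(account_created: datetime,
                               last_vote_at: datetime | None,
                               now: datetime) -> bool:
        """Apply rules 1 and 2 of the proposed scheme."""
        if now - account_created < MIN_ACCOUNT_AGE:
            return False   # account too new
        if last_vote_at is not None and now - last_vote_at < VOTE_COOLDOWN:
            return False   # already voted within the past year
        return True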


The problem I see with that attitude is that it's excusing companies with immense profits from having even the tiniest modicum of actual human review for things.


As long as the bad behavior is profitable, platforms aren't going to fix it: https://www.cnbc.com/2025/11/06/meta-reportedly-projected-10...


YouTube claim these were not automated actions. This explicitly rules out "algorithm/LLM makes a stupid mistake" but also seems to rule out "hits a threshold of community reports and gets automatically taken down pending manual review".

Also, it doesn't even need to be collusion between Microsoft and Google, but to pretend like that's never a thing is to be ignorant of history.

Stop defending these big companies for these things. Even if your version of the story is true, the fact they allow their platform to be abused this way is incredibly damaging to content creators trying to spread awareness of issues.

But also, do you seriously think there is a massive amount of competition at the scale of a 330k subscriber channel for people to bother pulling off this kind of attack for two videos on bypassing Windows 11 account and hardware requirements?

Regardless of what happened here, Google is to blame at least for the tools they have made.

As for Microsoft, I don't think there's anything disagreeable in saying that they've tried hard to get people to switch to new hardware via their TPM requirement, lying about the reasons. Likewise for forcing Microsoft accounts on people. I am not certain they were involved in this case, but they created the need for this kind of video to exist, so they are also implicated here.


> But also, do you seriously think there is a massive amount of competition at the scale of a 330k subscriber channel for people to bother pulling off this kind of attack for two videos on bypassing Windows 11 account and hardware requirements

Enough to cause this behavior. I don't know if there's a mathematical or organizational law or something, but it seems like there's always a way to abuse review mechanisms for large communities / sites.

Never enough manpower to do review for each case. Or reviews take a long time.


> Never enough manpower to do review for each case.

Manpower at a given salary cost.

All content platforms could throw more money at this problem, hire more / more skilled reviewers, and create better outcomes. Or spend less and get worse.

It's a choice under their control, not an inevitability.


Either that, or Microsoft and/or Google will send someone to my house to Raymond Reddington my ass if I install W11 with only a local account.


Stop making so much sense


The problem here is that companies seem to be none the wiser to such tactics, and creators are left holding the bag by such aggression.


Content hosts are damned if they do and damned if they don't. If they take their time and are cautious with reports, the platform ends up swamped with garbage that people complain about. If they try to be quick to clean up the garbage, some clean content gets caught and people complain.

The only frequent, obvious problem I see is YouTube not telling people why their videos get hidden, taken down, or down-ranked. Long-time creators get left in the dark about random big changes to the platform, which could be solved with an email.


In the olden days this would simply be solved by... having customer support befitting the size of the company. Of course, nowadays that's "inefficient".

We have companies with billions of customers but smaller customer service than a mid-sized retailer from the 90s. Something is not right.


This is the problem.

IME it's especially bad with AdMob. They've purposefully kept their email contact option broken for years, and the only "help" you can access is their forum, which is the absolute worst and never provides any meaningful resolutions. It's awful.


Companies listen to small claims lawsuits.


Google, Facebook, etc. do have support for some customers. If you have a $10m-a-year advertising account with them, I’m sure you’ll have an account manager.

People posting on these sites as content creators aren’t customers.


What became of the old ruse of simply not listening to content that one finds objectionable? Now it needs to be nuked from orbit yesterday to make sure nobody's pure eyes glance at it.


The world got more connected and we all had to suffer the consequences of other people consuming propaganda, so we decided it should be banned, except for the ones who consume it, who decided the same process should be used to ban reality and only allow propaganda.


They are absolutely aware of these sorts of abuses. I'll bet my spleen that it shows up as a line item in the roadmapping docs of their content integrity/T&S teams.

The root problem is twofold: the inability to reliably automate distinguishing "good actor" from "bad actor", and a lack of will to throw serious resources at solving the problem via manual, high-precision moderation.


The law doesn’t allow companies to do anything other than what they are doing.


In certain niches, the marginal value of kneecapping the competition exceeds the viable budget for counteracting gaming. It may be a quirk of this reality’s hyperparameters that a UGC media monopoly inevitably suffers from this. Or maybe at a certain point it hits their bottom line and better enforcement is contrived.


Your theory is baseless given that YouTube claimed that decisions weren't automated:

> The platform claimed its "initial actions" (could be either the first takedown or appeal denial, or both) were not the result of automation.


YouTube frequently claims this and is frequently caught lying. (Oh, you really watched this one-hour video and reached your decision in an email sent 96 seconds after the appeal was submitted? Yeah, okay...)

They'll silently fix the edge case in the OP and never admit it was any kind of algorithmic (OR human) failure.


I'm aware that there's a chance that Google is lying; I'm just pointing out that their comment doesn't make any sense if they believe that Google deserves the benefit of the doubt.


IMO, major hosting providers ought to implement a function to neuter mass-reporting: some threshold past which each additional report lowers the item's priority in the review stack.
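A toy version of that idea, with an invented threshold and penalty. Past the threshold, each extra report pushes the item down the queue rather than up:

    # Toy priority score for a review queue; threshold and penalty invented.
    def report_priority(report_count: int, threshold: int = 20,
                        penalty: float = 0.5) -> float:
        """Grows with report count up to the threshold; past it, each
        additional report lowers the score, on the theory that sudden
        spikes are more likely coordinated abuse than organic concern."""
        if report_count <= threshold:
            return float(report_count)
        return threshold - penalty * (report_count - threshold)

    print(report_priority(10))    # -> 10.0
    print(report_priority(500))   # -> -220.0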


Thank you for the sane reply

People are so quick to assume conspiracy because it is mentally convenient



