
So Google is between a rock and a hard place here.

If they don't react quickly and decisively to reports of "possible physical harm", even if the reports seem unfounded, they'll eventually end up in a NY Times story saying that somebody who committed suicide "previously watched a video which had been reported to YouTube multiple times, but no action was taken by Google."



You can act quickly and decisively and also correctly. Take the average number of reports per day, multiply by the average length of a reported segment, double it for margin, then divide that total by the effective work hours per person per day and hire that many people to process reports. Congrats, your average time to resolution is under 24 hours.
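
A back-of-the-envelope version of that staffing math, with purely illustrative numbers (every input below is an assumption, not a real figure):

    # Rough moderation staffing estimate; every input here is assumed
    reports_per_day = 200_000            # assumed daily report volume
    review_minutes_per_report = 4        # assumed average reported-segment length
    effective_hours_per_person = 6       # productive review hours per person per day

    hours_needed = reports_per_day * review_minutes_per_report * 2 / 60   # x2 margin
    headcount = hours_needed / effective_hours_per_person
    print(f"~{headcount:,.0f} reviewers to clear each day's reports within a day")

With those made-up inputs it works out to roughly 4,400 reviewers; the real figure depends entirely on actual report volume and review time.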

If that's too expensive, your platform is broken. You need to be able to process user reports. If you can't, rethink what you're doing.


Not saying you’re wrong in this particular instance, but there are all sorts of areas where we accept that harm will occur at scale (e.g. that 40,000 people per year die in motor-vehicle incidents just in the US). How do we determine what is reasonable to expect?


We require auto manufacturers to include certain safety features in their vehicles, to decrease deaths to a socially acceptable level.

The central ill of centralized web platforms is that the US never mandated customer/content SLAs in regulation, even as their size made that a social necessity (i.e. once they became 'too big for alternatives to be alternatives').

It wouldn't be complicated:

   - If you're a platform (hosting user content) over X revenue...
   - You are required to achieve a minimum SLA for responsiveness
   - You are also required to hit minimum correctness / false positive targets
   - You are also required to implement and facilitate a third-party arbitration mechanism, by which a certified arbitrator (customer's choice) can process a dispute (also with SLAs for responsiveness)

Google, Meta, Apple, Steam, Amazon, etc. could all be better, more effective platforms if they spent more time and money on resolution.

As-is, they invest what current law requires, and we get the current situation.
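
As a rough sketch of what such a rule could encode (every threshold and metric name below is hypothetical, not taken from any existing regulation):

    # Hypothetical platform-SLA thresholds; illustrative values only
    PLATFORM_SLA = {
        "applies_above_annual_revenue_usd": 1_000_000_000,
        "max_hours_to_first_human_response": 24,
        "max_hours_to_final_decision": 72,
        "max_false_positive_rate": 0.05,
        "max_days_to_arbitration_decision": 30,
    }

    def is_compliant(reported_metrics: dict) -> bool:
        """Check a platform's reported moderation metrics against the sketch SLA."""
        return (
            reported_metrics["median_first_response_hours"] <= PLATFORM_SLA["max_hours_to_first_human_response"]
            and reported_metrics["median_decision_hours"] <= PLATFORM_SLA["max_hours_to_final_decision"]
            and reported_metrics["false_positive_rate"] <= PLATFORM_SLA["max_false_positive_rate"]
            and reported_metrics["median_arbitration_days"] <= PLATFORM_SLA["max_days_to_arbitration_decision"]
        )

Stating the targets is the easy part; the hard part is who measures them (especially false positives) and what the penalty for missing them is.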


It’s even worse when you think about what happens when it’s NOT English + NOT mainstream content.

I really wish someone could tell me that either

1) Yes we can make a system that enables functional and effective customer support (because this is what this case is about) no matter the language

2) No we can’t because it’s fundamentally about manpower which can match the context with actual harm.

Whatever I suspect the answer is, having a definitive one decides how these problems eventually need to be solved, which in turn tells us what we should ask and hope for.


> a system that enables functional and effective customer support

I'm not saying that it's humans, but it's humans.

Augmented by technology, but the only currently viable arbitrator of human-generated edge cases is another human.

If a platform can't afford to hire moderation resources to do the job effectively (read: skilled resources in enough quantity to make effective decisions), then it's not a viable business.


> If a platform can't afford to hire moderation resources to do the job effectively (read: skilled resources in enough quantity to make effective decisions), then it's not a viable business.

But, it is viable. Many profitable businesses exist that don't pay for this.

One may instead mean that such businesses should be made non-viable, in which case we should think critically about which business models we currently like for other reasons would also become non-viable. For example, will users suddenly need to pay per post? If so, is that worth the trade-off?


Businesses that are profiting off un-paid-for externalities aren't socially sustainable businesses. They're just economic scams that happen to be legal.

Imho, we should do what we can to make sure they're required to pay for those externalities.

Then, they either figure out a way to do that profitably (great! innovation!) or they go under.

But we shouldn't allow them to continue to profit by causing external ills.


> Then, they either figure out a way to do that profitably (great! innovation!) or they go under.

They do figure out how. That's the problem. This stuff is all trade offs.

If you say they have to remove the videos or they're in trouble then they remove the videos even if they shouldn't.

You can come up with some other rule, but you can't eliminate the trade-off, so the choice you're making is how you want to pay: do you want more illegitimate takedowns, or less censorship of whatever you were trying to censor?

If you tried to mandate total perfection then they wouldn't be able to do it and neither would anybody else, and then you don't have video hosting. Which nobody is going to accept.
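
A toy illustration of that trade-off (classifier scores and thresholds entirely made up): with imperfect detection, moving the takedown threshold only shifts errors between wrongful removals and missed harmful content.

    # Hypothetical scores from an imperfect harm classifier
    harmful_scores  = [0.2, 0.5, 0.7, 0.9]   # truly harmful videos
    harmless_scores = [0.1, 0.4, 0.6, 0.8]   # legitimate videos

    for threshold in (0.3, 0.55, 0.85):
        missed   = sum(s < threshold for s in harmful_scores)    # harmful left up
        wrongful = sum(s >= threshold for s in harmless_scores)  # legit videos removed
        print(f"threshold {threshold}: {missed} missed, {wrongful} wrongful takedowns")

Sweeping the threshold from lenient to strict turns 1 missed / 3 wrongful into 3 missed / 0 wrongful; there is no setting where both error types reach zero unless the classifier is perfect.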


Given YouTube's profits, I think it's fair to say there's a substantial viable middle ground of much more vigorous (and labor intensive) appeals than what's currently done.

And that requirement can be created by more robust, outcome-defined regulation.


> Given YouTube's profits, I think it's fair to say there's a substantial viable middle ground of much more vigorous (and labor intensive) appeals than what's currently done.

People keep looking at the absolute amount of profit across a massive service and assuming that it means they could afford to do something expensive. But the cost of the expensive thing is proportional to the size of the service, and then they can't, because dividing the profits by the number of hours of video yields an amount of money that doesn't buy you that.
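
To make that concrete under purely assumed figures (neither input below is an actual YouTube statistic):

    # Profit available per uploaded hour of video, under assumed inputs
    annual_profit_usd = 20_000_000_000       # assumed platform profit
    hours_uploaded_per_minute = 500          # assumed upload rate
    hours_uploaded_per_year = hours_uploaded_per_minute * 60 * 24 * 365

    budget_per_uploaded_hour = annual_profit_usd / hours_uploaded_per_year
    print(f"${budget_per_uploaded_hour:,.2f} of profit per uploaded hour")

Under those assumptions that's roughly $76 of profit per uploaded hour before any other costs; whether that buys meaningful human review of reports and appeals depends entirely on the real inputs.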

> And that requirement can be created by more robust, outcome-defined regulation.

What are you proposing exactly?

Outcome-based metrics are often the things that fail the hardest. It's reasonable to provide a given level of human review when functioning CAPTCHAs on the reporting function rate-limit spam reports. But if you then require that level by law, and LLMs come along that can both solve CAPTCHAs and auto-generate spam reports to target competitors etc., the cost of doing human review goes up many-fold while you're still expected to meet the same outcomes. Then they either have to push whatever metric you're not regulating to draconian hellscape levels in order to hit the one you are demanding, or you're demanding something nobody knows how to do at all, both of which are unreasonable.

And all of this is because the government doesn't know how to solve the problem either. If you want to prohibit things with a "risk of physical harm" then you have to hire law enforcement to go drink from the fire hose and try to find those things so the perpetrators can be prosecuted. But that's really expensive to do properly so the government wants to fob it off on someone else and then feign indignation when they can't solve it either.


Please explain what kind of magic your solution uses to ensure that reports always come in at a perfectly even pace without any peaks or valleys. Because without that, your proposed approach will not work.


Perhaps the current process becomes the backlog-management system. This wouldn't be an insurmountable problem if the incentives were in place.



