Yes, but observe that for all three of the things that immediately came to your mind, you have respectively: 1. a thing that still has a lot of scams in it (though it may be the best of the three) [1]; 2. a thing so full of scams and fake products that using it is already a minefield (one my mother-in-law is already incapable of navigating successfully, based on the number of shirts my family has gotten with lazy AI-generated art [2]); and 3. a thing well known for generating false statements and incorrect conclusions.
I'm actually somewhat less critical of Apple/Google/Facebook/etc. than probably most readers would be, on the grounds that it simply isn't possible to build a "walled garden" at the scale of the entire internet. It is not possible for Big Tech to exclude scammers. The scammers collectively are firing more brain power at the problem than even Big Tech can afford to, and the game theory analysis is not entirely unlike my efforts to keep my cat off my kitchen counter... it doesn't matter how diligent I am; the 5% of the time the cat gets up there, it finds a morsel of shredded cheese or licks some dribble of something tasty, barely large enough for me to notice but a nice snack with a taste explosion for the much smaller cat, and that means I'm never going to win this fight. The cat has all day. I'm doing dozens of other things.
There's no way to build a safe space that retains the size and structure of the current internet. The scammers will always be able to overpower what the walled garden can bring to bear, because there are so many of them and they have at least an order of magnitude more resources... and that's being very conservative; I think I could safely say two orders of magnitude, and I wouldn't really be all that surprised if the omniscient narrator could tell us it's already over three.
[1]: https://9to5mac.com/2025/09/25/new-study-shows-massive-spike...
[2]: To forestall any AI debate, let me underline the word "lazy" in the footnote here. Most recently we received a shirt with a very large cobra on it, and the cobra has at least three pupils in each eye (depending on how you count) and some very eye-watering geometry for the sclera between them. Quite unpleasant to look at. What we're getting down the pipeline now is from some now very out-of-date models.
I don't accept the excuse that it's too hard. If they have to spend $10 billion per year to maintain an acceptable level of trust on their platforms, then so be it. It's the cost of doing business. If I went into a mall and opened up a fake Wells Fargo bank branch, it would be shut down pretty much instantly by human intervention. These are the conditions most businesses run under. Why should these platforms be given such leeway just because "it's hard"? Size and scale shouldn't be an excuse. If it's not viable to prevent fraud, then they don't have a viable business.
Yes, it's not that it's impossible, it's that it's impossible while operating how they want to operate, scaling as much as they want to scale, and profiting as much as they want to profit. But a business model that can't be pursued both ethically and profitably shouldn't be excused as inevitably unethical; it should be regulated and/or banned.
YouTube regularly shows me ads that fit that analogy quite well. The ECB and Elon Musk take turns offering me guaranteed monthly deposits into my account for one-time fees of 200 and 400 euros. The deepfakes are intentionally bad enough to filter for good victims.
You don't even need a human to review these ads, but inserting one wouldn't be expensive.
But what actually is an acceptable level of trust? Acceptable for whom? For the billionaires, it's good enough if the outside is worse, or even if it merely appears worse.
> It is not possible for Big Tech to exclude scammers
It's 100% possible. It might not be profitable.
An app store doesn't have the "The optimum amount of fraud is not zero" problem. Preventing fraudulent apps is not a probability problem; you can actually continuously improve your capability without also accidentally blocking "good" apps.
Meanwhile, Apple regularly stymies developers trying to release updates to apps that already work and are used by many, over random things.
And despite that, they let through clear and obvious scams like a "LastPass" app not made by LastPass. That's just unacceptable. It should never be possible for a scam trading on someone else's trademark to get through. There's no excuse.
> Preventing fraudulent apps is not a probability problem
Unfortunately it is. You've even provided examples of a false positive and a false negative. Every discrimination process is going to have those at some rate. It might become very expensive for developers to go through higher levels of verification.
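To put rough numbers on that, here's a toy back-of-the-envelope sketch in Python; every figure in it is invented, purely to show that any nonzero error rate produces both kinds of mistakes at app-store scale:

    # Toy sketch with made-up numbers: even a very accurate review process
    # yields both false negatives (scams shipped) and false positives
    # (honest apps rejected) at app-store volume.
    submissions_per_year = 5_000_000   # hypothetical submission volume
    scam_fraction = 0.01               # hypothetical: 1% of submissions are scams
    catch_rate = 0.99                  # hypothetical: reviewers catch 99% of scams
    false_flag_rate = 0.001            # hypothetical: 0.1% of honest apps wrongly rejected

    scams = submissions_per_year * scam_fraction
    honest = submissions_per_year - scams

    scams_missed = scams * (1 - catch_rate)     # false negatives
    honest_blocked = honest * false_flag_rate   # false positives

    print(f"Scams that still slip through: {scams_missed:,.0f}")   # ~500
    print(f"Honest apps wrongly blocked: {honest_blocked:,.0f}")   # ~4,950

You can push either number down, but generally only by pushing the other one (or the review cost) up.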
No, it's already a solved problem. For instance, newspapers moderate and approve all content that they print. While some bad actors may be able to sneak scams in through the classifieds, the local community has a direct way to contact the moderators and provide feedback.
The answer is that it just takes a lot of people. What if no content could appear on Facebook until it passed a human moderation process?
As the above poster said, this is not profitable which is why they don't do it. Instead they complain about how hard it is to do programmatically and keep promising they will get it working soon.
A well functioning society would censure them. We should say that they're not allowed to operate in this broken way until they solve the problem. Fix first.
Big tech knows this which is why they are suddenly so politically active. They reap billions in profit by dumping the negative externalities onto society. They're extracting that value at a cost to all of us. The only hope they have to keep operating this way is to forestall regulation.
> The answer is that it just takes a lot of people.
The more of those people you hire, the higher the chance that a bad actor will slip through and push malicious things through for a fee. If the scammer has a good enough system, they'll do this one time with one person and then move on to the next one, so now you need to verify that all your verifiers are in fact perfect in their adherence to the rules. Now you need a verification system for your verification system, which will eventually need a verification system^3 for the verification system^2, ad infinitum.
This is simply not true in every single domain. The fact people think tech is different doesn't mean it necessarily is. It might just mean they want to believe it's different.
At the end of the day, I can't make an ad and put it on a billboard pretending to be JP Morgan and Chase. I just can't.
> This is simply not true in every single domain. The fact people think tech is different doesn't mean it necessarily is. It might just mean they want to believe it's different.
Worldwide and over history, this behaviour has been observed in elections (gerrymandering), police forces (investigating complaints against themselves), regulatory bodies (Boeing staff helping the FAA decide how airworthy Boeing planes are), academia (who decides what gets into prestigious journals), newspapers (who owns them, who funds them with advertisements, who regulates them), and broadcasts (ditto).
> At the end of the day, I can't make an ad and put it on a billboard pretending to be JP Morgan and Chase. I just can't.
JP Morgan and Chase would sue you after the fact if they didn't like it.
Unless the owners of the billboard already had a direct relationship with JP Morgan and Chase, they wouldn't have much of a way to tell in advance. If they do already have a relationship with JP Morgan and Chase, they may deny the use of the billboard for legal adverts that are critical of JP Morgan and Chase and their business interests.
The same applies to web ads, the primary difference being that each ad is bid on within the blink of an eye as the page opens in your browser, which makes it hard to gather evidence.
> The more of those people you hire, the higher the chance that a bad actor will slip through and push malicious things through for a fee.
Again, the newspaper model already solves this. Moderation should be highly localized, done by people from the communities for which the content is being moderated. That maximizes the chance that the moderators' values will align with the community's. It's harder to hide bad actors in small groups, especially when you can be named and shamed by people you see every day. Managers, their coworkers, and the community itself are the "verifiers."
Again, this model has worked since the beginning of time and it's 1000x better than what FB has now.
> What if no content could appear on Facebook until it passed a human moderation process?
While I'd be just fine with Meta, X etc. (even YouTube, LinkedIn, and GitHub!) shutting down because the cost of following the law turned out to be too expensive, what you suggest here also has both false positives and false negatives.
False negatives: Polari (and other cants) existed to sneak past humans.
False positives: humans frequently misunderstand innocent uses of jargon as signs of malfeasance, e.g. vague memories of a screenshot from ages ago where someone accidentally opened the web browser's dev console while on Facebook, saw messages about "child elements" being "killed", and freaked out.
> The answer is that it just takes a lot of people. What if no content could appear on Facebook until it passed a human moderation process?
A lot of people = a lot of cost. That would probably settle out lower than the old classified ads, but paying even a dollar per Facebook post would be a radically different use than the present situation.
And of course you'd end up with a ban of some sort on all smaller forums and BBS that couldn't maintain compliance requirements.
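To put rough numbers on "a lot of people = a lot of cost" above, here's a toy calculation in Python; every figure is invented, just to show the order of magnitude:

    # Toy arithmetic with made-up numbers for full human pre-moderation.
    posts_per_day = 1_000_000_000   # hypothetical daily post volume
    seconds_per_review = 30         # hypothetical review time per post
    hourly_wage = 15.0              # hypothetical fully loaded reviewer wage, USD

    cost_per_post = seconds_per_review / 3600 * hourly_wage
    cost_per_year = cost_per_post * posts_per_day * 365

    print(f"Cost per post: ${cost_per_post:.3f}")                 # ~$0.125
    print(f"Cost per year: ${cost_per_year / 1e9:.0f} billion")   # ~$46 billion

Cheaper per post than a dollar, sure, but still tens of billions a year and on the order of a million full-time reviewers.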
You’re right that there will always be some false positives and negatives. At the same time, I do think that if Apple really spent money and effort they could prevent most of the “obvious” scams (e.g. fake LastPass), which make up the majority, without passing the cost onto developers and while minimally affecting their profits.
No, sorry. It's eminently reasonable to ask or demand that a business reduce its (fantastic) margins/profits in order to remain a prosocial citizen in the marketplace. In fact, we do this all the time with things like "regulations".
It may be unreasonable to demand that a small business tackle a global problem at the expense of its survival. But we are not talking about small or unprofitable business. We are talking about Meta, Alphabet, Apple, Amazon. Companies with more money than they know what to do with. These global companies need to funnel some % of their massive profits into tackling the global problems that their products have to some degree created.
> To forestall any AI debate, let me underline the word "lazy" in the footnote here. Most recently we received a shirt with a very large cobra on it, and the cobra has at least three pupils in each eye (depending on how you count) and some very eye-watering geometry for the sclera between them.
Okay, but if it matches the illustration on the storefront, can it really be called a scam?
Fair, I was sloppy there. The cobra isn't a scam itself; it's just a demonstration that it's already a hard place to navigate, what with everything that is going on there. A deluge of AI garbage may not be a "scam" in the strictest sense of the term, but it still breaks certain unspoken expectations the Boomer generation has about goods and what exactly it is you are buying.
We have also received a number of shirts where AI was used to create unlicensed NFL designs and other such actual frauds. And whatever your feelings about IP laws, it was definitely low quality stuff... looked good if you just glanced at it but when you went to look at any particular detail of the shirt it was AI garbage. (I say "AI garbage" precisely because not all stuff from AI is necessarily garbage... but this was.)
> it still breaks certain unspoken expectations the Boomer generation has about goods and what exactly it is you are buying.
Sigh. I learned from my pre-boomer parents that if the product were any good it wouldn't need to be advertised.
> looked good if you just glanced at it but when you went to look at any particular detail of the shirt it was AI garbage.
To be fair, that was also all over the place before "AI" as currently understood. (And I don't think that previous iterations of machine learning techniques were involved.)