> It is not possible for Big Tech to exclude scammers
It's 100% possible. It just might not be profitable.
An app store doesn't have the "The optimum amount of fraud is not zero" problem. Preventing fraudulent apps is not a probability problem: you can keep improving your detection capability without also accidentally blocking "good" apps.
Meanwhile, Apple regularly stymies developers trying to ship updates to apps that already work and are widely used, over seemingly random things.
And despite that, they let through clear and obvious scams like a "LastPass" app not made by LastPass. That's just unacceptable. It should never be possible to get a scam through under someone else's trademark. There's no excuse.
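To make the "no excuse" point concrete, here's a minimal, hypothetical sketch of the kind of name check a store review pipeline could run on every submission; the brand list, owner labels, and similarity threshold are all assumptions for illustration, not anything Apple actually does.

```python
from difflib import SequenceMatcher

# Hypothetical list of protected brand names mapped to their official
# developer accounts; a real store could build this from trademark
# registries and its own catalog of verified publishers.
PROTECTED_BRANDS = {
    "lastpass": "LastPass (official developer account)",
    "1password": "AgileBits (official developer account)",
}

def flag_brand_collision(app_name: str, publisher: str) -> str | None:
    """Flag submissions whose name resembles a protected brand they don't own."""
    normalized = app_name.lower().replace(" ", "")
    for brand, owner in PROTECTED_BRANDS.items():
        similar = SequenceMatcher(None, normalized, brand).ratio() > 0.8
        if (brand in normalized or similar) and publisher != owner:
            return f"'{app_name}' collides with trademark '{brand}': hold for manual review"
    return None

# A fake "Lastpass" app from an unknown publisher is held, not auto-approved.
print(flag_brand_collision("Lastpass Password Manager", "Scam Corp Ltd"))
```

Even something this crude would have held the fake "Lastpass" submission for a human to look at.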
> Preventing fraudulent apps is not a probability problem
Unfortunately it is. You've even provided examples of a false positive and a false negative. Every discrimination process is going to have those at some rate. It might become very expensive for developers to go through higher levels of verification.
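To put rough numbers on "at some rate": a back-of-envelope sketch with entirely made-up figures (illustrative assumptions, not App Store statistics) showing that even a 99%-accurate review process produces both kinds of errors at store scale.

```python
# All numbers below are hypothetical, chosen only to show the arithmetic.
submissions_per_year = 5_000_000   # assumed volume of app and update submissions
scam_rate = 0.01                   # assumed share of submissions that are scams
false_negative_rate = 0.01         # scams the review process misses
false_positive_rate = 0.01         # legitimate submissions wrongly rejected

scams = submissions_per_year * scam_rate
legit = submissions_per_year - scams

missed_scams = scams * false_negative_rate       # fake-"LastPass"-style apps let through
wrongly_blocked = legit * false_positive_rate    # good developers stymied "for random things"

print(f"Scams that slip through: {missed_scams:,.0f}")               # 500
print(f"Legit submissions wrongly blocked: {wrongly_blocked:,.0f}")  # 49,500
```

You can push both rates down, but at this scale neither error count reaches zero for free.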
No, it's already a solved problem. For instance, newspapers moderate and approve all the content they print. While some bad actors may be able to sneak scams in through the classifieds, the local community has a direct way to contact the moderators and provide feedback.
The answer is that it just takes a lot of people. What if no content could appear on Facebook until it passed a human moderation process?
As the above poster said, this is not profitable, which is why they don't do it. Instead they complain about how hard it is to do programmatically and keep promising they will get it working soon.
A well-functioning society would censure them. We should say that they're not allowed to operate in this broken way until they solve the problem. Fix first.
Big tech knows this which is why they are suddenly so politically active. They reap billions in profit by dumping the negative externalities onto society. They're extracting that value at a cost to all of us. The only hope they have to keep operating this way is to forestall regulation.
> The answer is that it just takes a lot of people.
The more of those people you hire, the higher the chance that a bad actor will slip through and push malicious things through for a fee. If the scammer has a good enough system, they'll do this one time with one person and then move on to the next one, so now you need to verify that all your verifiers are in fact perfect in their adherence to the rules. Now you need a verification system for your verification system, which will eventually need a verification system^3 for the verification system^2, ad infinitum.
This is simply not true in every single domain. The fact people think tech is different doesn't mean it necessarily is. It might just mean they want to believe it's different.
At the end of the day, I can't make an ad and put it on a billboard pretending to be JP Morgan and Chase. I just can't.
> This is simply not true in every single domain. The fact people think tech is different doesn't mean it necessarily is. It might just mean they want to believe it's different.
Worldwide and over history, this behaviour has been observed in elections (gerrymandering), police forces (investigating complaints against themselves), regulatory bodies (Boeing staff helping the FAA decide how airworthy Boeing planes are), academia (who decides what gets into prestigious journals), newspapers (who owns them, who funds them with advertisements, who regulates them), and broadcasts (ditto).
> At the end of the day, I can't make an ad and put it on a billboard pretending to be JP Morgan and Chase. I just can't.
JP Morgan and Chase would sue you after the fact if they didn't like it.
Unless the owners of the billboard already had a direct relationship with JP Morgan and Chase, they wouldn't have much of a way to tell in advance. And if they did already have a relationship with JP Morgan and Chase, they might deny use of the billboard for legal adverts that are critical of JP Morgan and Chase and their business interests.
The same applies to web ads; the main difference is that each ad is bid on in the blink of an eye as the page opens in your browser, which makes it hard to gather evidence.
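For anyone unfamiliar with how that bidding works, here's a heavily simplified, hypothetical sketch of a real-time-bidding auction (not any particular ad exchange's API): every impression is auctioned independently under a tight latency budget, so two loads of the same page can show completely different ads.

```python
import random

# Hypothetical demand partners; a real exchange queries many of them
# within a latency budget of roughly 100ms per page load.
BIDDERS = ["brand_a", "brand_b", "sketchy_advertiser"]

def run_auction(impression_id: str) -> tuple[str, float]:
    """Pick a winning ad for one impression; every page load starts from scratch."""
    bids = {bidder: round(random.uniform(0.10, 2.50), 2) for bidder in BIDDERS}
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

# Different users (and repeat visits) can get different winners, which is
# part of why a scammy ad someone saw can be hard to reproduce as evidence.
for i in range(3):
    print(run_auction(f"imp-{i}"))
```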
> The more of those people you hire, the higher the chance that a bad actor will slip through and push malicious things through for a fee.
Again, the newspaper model already solves this. Moderation should be highly localized, done by people from the communities whose content they moderate. That maximizes the chance that the moderators' values will align with the community's. It's harder for bad actors to hide in small groups, especially when you can be named and shamed by people you see every day. Managers, their coworkers, and the community itself are the "verifiers."
Again, this model has worked since the beginning of time and it's 1000x better than what FB has now.
> What if no content could appear on Facebook until it passed a human moderation process?
While I'd be just fine with Meta, X, etc. (even YouTube, LinkedIn, and GitHub!) shutting down because the cost of following the law turned out to be too high, what you suggest here also has both false positives and false negatives.
False negatives: Polari (and other cants) existed to sneak past humans.
False positives: humans frequently misread innocent uses of jargon as signs of malfeasance. I have vague memories of a screenshot from ages ago where someone accidentally opened the web browser's dev console while on Facebook, saw messages about "child elements" being "killed", and freaked out.
> The answer is that it just takes a lot of people. What if no content could appear on Facebook until it passed a human moderation process?
A lot of people = a lot of cost. The price per post would probably settle out lower than the old classified ads, but paying even a dollar per Facebook post would make it a radically different product from the present one.
And of course you'd end up with a ban of some sort on all the smaller forums and BBSes that couldn't keep up with the compliance requirements.
You’re right that there will always be some false positives and negatives. At the same time, I do think that if Apple really spent the money and effort, they could prevent most of the "obvious" scams (e.g. the fake LastPass), which make up the bulk of the problem, without passing the cost onto developers and while barely denting their profits.
No, sorry. It's eminently reasonable to ask or demand that a business reduce its (fantastic) margins/profits in order to remain a prosocial citizen in the marketplace. In fact, we do this all the time with things like "regulations".
It may be unreasonable to demand that a small business tackle a global problem at the expense of its survival. But we are not talking about small or unprofitable businesses. We are talking about Meta, Alphabet, Apple, and Amazon: companies with more money than they know what to do with. These global companies need to funnel some percentage of their massive profits into tackling the global problems that their products have, to some degree, created.