
I have a comment below about this: evidence is actually contrary to the goal.

If you bring evidence, you're introducing a place for a counterattack to apply leverage.

If instead you make completely baseless claims on obviously false pretenses, you've actually made things harder to counter: the only counterproofs are trivial ones, and those had to be dismissed to believe the false claim in the first place.

-

Take COVID vaccine deaths for example. Imagine I baselessly say that the medical field is lying and 50% of COVID deaths are actually vaccine complications.

For someone to believe that, they must completely distrust any official numbers on COVID deaths... so once they've fallen for the lie, how do you convince them otherwise? The only counterproofs are the trivial-to-find sources of data that they already had to dismiss to believe me in the first place. Suddenly I've implanted a self-reinforcing lie that entrenches its believers against anyone who isn't in their echo chamber.

The root of all this is straightforward enough: there is nothing stronger than requiring someone to disbelieve their own beliefs to counter your disinformation. If you add a deepfake, you've added something outside their belief system to attack, so you've weakened the attempt. People simply do not like to be wrong about things they think they've figured out.



Makes me think of https://en.wikipedia.org/wiki/Big_lie: the use of a lie so colossal that no one would believe that someone "could have the impudence to distort the truth so infamously."

And yes, I do not think adding a deepfake of some big shot saying or doing X would work better than just repeating the lie that they say or do X.


If you used a Biden deepfake you'd more likely want it to be of him tripping on something than admitting allegiance to the lizards.

> Imagine I [...] For someone to believe that, they must completely distrust any [data]

Do you think this is like a 419 scam, where saying something a bit outrageous sorts out the gullible and bypasses the wary, or do you think your claim can somehow hijack a credulous person long enough that they make that mental recategorization of the data sources and are stuck?


The former gives you a springboard for the latter.

The people falling for the obvious nonsense are self-filtering, just like people falling for obvious 419 scams.

But as you grow the base that believes in your disinformation, you gain real people who are fully convinced of these things, and the effect of that is a force multiplier.

People talk, and if people's self-held beliefs are the strongest reinforcement, the second strongest is the people we surround ourselves with. If someone falls for this stuff and starts talking to their spouse, now someone close to them is pushing this agenda, and they can start nudging their friends to be more skeptical of the official story.

It's not going to be a 1:1 conversion: a lot of people close to them will push back, but remember, this is all based on absolutely no proof, so it can twist itself to fit any box. People can moderate the story to avoid pushback: "Oh, you know I'm not an anti-vaxxer... but I did hear that the vaccine has a lot of complications," and maybe they connect that to a real article about a myocarditis case. Now maybe they're not pushing my original lie of "50% of deaths," but I've planted a suggestion in a rather moderate person using a chain of gullible people.

And something especially effective about this: while the most brazen aspects of disinformation hit less intelligent people hardest (https://news.ku.edu/2020/04/28/study-shows-vulnerable-popula...), once you start to make inroads with increasingly better educated groups via the network effect, they tend not to want to believe they're wrong. Highly intelligent people can be more susceptible to some aspects of disinformation in exactly this way: https://www.theguardian.com/books/2019/apr/01/why-smart-peop...

That lends itself to increasingly authoritative figures becoming deeply entrenched in those campaigns, leading to things like... https://wapp.capitol.tn.gov/apps/BillInfo/Default.aspx?BillN...

-

Overall, I've said this before: everyone is dreaming of AI dystopias rooted in things like deepfakes putting us in a post-truth era, or AI gaining sentience and deciding it doesn't need humans...

The reality is so much more boring, yet already in progress. We're starting to embed black-box ML models trained on biased or flawed data into the roots of society.

ML already dictates what a large number of people are exposed to via social media. ML is starting to work its way into crime fighting. We gate access to services behind ML models that are allowed to just deny us access. How long before ML is allowed to start messing with credit ratings?

And yet getting models to "explain" their reasoning is a field of study that lags far behind all of these deployments. You can remove race from a dataset and ML will still gladly codify race into its decisions via proxies like zipcodes; after all, it has no concept of morality or equality: it's just a giant shredder for data.
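To make that proxy effect concrete, here's a minimal synthetic sketch (not from the thread; all data, thresholds, and column names are invented) using numpy and scikit-learn. Race is never given to the model as a feature, but a zipcode column that correlates with race lets the model reproduce the biased historical decisions anyway:

    # Synthetic illustration of proxy discrimination: the protected attribute is
    # never a feature, yet a correlated "zipcode" column lets the model
    # reproduce the biased historical labels anyway. All data here is made up.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    race = rng.integers(0, 2, size=n)                        # protected attribute, excluded from features
    zipcode = np.where(rng.random(n) < 0.9, race, 1 - race)  # proxy: 90% correlated with race
    income = rng.normal(60, 15, size=n)                      # legitimate feature, independent of race

    # Historical labels are biased: at the same income, group 1 was approved far less often.
    approved = (income - 25 * race + rng.normal(0, 5, size=n) > 45).astype(int)

    X = np.column_stack([income, zipcode])                   # note: race is not a column
    model = LogisticRegression(max_iter=1000).fit(X, approved)
    pred = model.predict(X)

    print("predicted approval rate, group 0:", round(pred[race == 0].mean(), 3))
    print("predicted approval rate, group 1:", round(pred[race == 1].mean(), 3))
    # The gap persists: the model has "rediscovered" race through the zipcode proxy.

In this toy setup you could just drop the zipcode column and the gap would mostly vanish, but real data has many overlapping proxies, which is why it's so hard to audit these systems without the explainability work that lags behind them.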

Right now a glorified bag of linear regressions poses a far more effective danger than T-1000s ever will. But since that's not as captivating, we instead see a ton of gnashing of teeth about the ethics of general intelligence, or how we need to regulate the ability to make fake videos, rather than boring things like "let's restrict ML from as many institutional frameworks as possible".


> But since that's not as captivating, we instead see a ton of gnashing of teeth about the ethics of general intelligence, or how we need to regulate the ability to make fake videos, rather than boring things like "let's restrict ML from as many institutional frameworks as possible".

It's not only not captivating, it's downright inconvenient. If I'm at a TED talk, I don't want to hear about how ML models (some of which my company has deployed) are causing real-world harms __right now__ through automation and black-box discrimination. Nick Bostrom's Superintelligence spends laughably little time pondering the fact that AI will likely lead to a world of serfs and trillionaires.

No, people want to hear about how we might get Terminator/Skynet in 30 years if we’re not careful. Note that these problems are already complicated by ill-defined concepts like sentience, consciousness and intelligence, the definitions of which suck all of the oxygen out of the room before practical real-world harms can be discussed.



