> But since that's not as captivating, instead we see a ton of gnashing of teeth about the ethics of general intelligence, or how we need to regulate the ability to make fake videos, rather than boring things like "let's restrict ML from as many institutional frameworks as possible"
It’s not only not captivating, it’s downright inconvenient. If I’m at a TED talk, I don’t want to hear about how ML models (some of which my company has deployed) are causing real-world harms __right now__ through automation and black-box discrimination. If you read Nick Bostrom’s Superintelligence, you’ll notice it spends laughably little time pondering the fact that AI will likely lead to a world of serfs and trillionaires.
No, people want to hear about how we might get Terminator/Skynet in 30 years if we’re not careful. Note that these problems are already complicated by ill-defined concepts like sentience, consciousness and intelligence, the definitions of which suck all of the oxygen out of the room before practical real-world harms can be discussed.