(For context: I work in machine learning, but not in this specific area and not with Nvidia.)
Who are you quoting here? I did not make those arguments, and I cannot find anyone else in this thread who did. Perhaps you are arguing against a strawman?
Re: Fire: I wasn’t around for it, but it’s safe to assume people discovered fire was dangerous around the same time they discovered fire itself, long before cities existed and before humans decided to harness it.
Re: Social harms from AI: you should ask yourself about the harms that can come from automated decision making. We’re automating decisions in policing, in hiring, in ranking posts on social media, in targeting political advertisements on social media, etc. We’re using AI research for profiling, for improving bomb-carrying drones, for getting children to spend more time and money on games and social media, etc. I’m sure you agree that at least one of these is harmful.
Re: Social harm from ‘deepfakes’: It’s currently costly to create a convincing fake image, audio clip, or video, but it’s reasonable to extrapolate that making convincing fakes will be cheap in the near future. The potential for harm is easy to see, especially since people are already fooled by obvious photoshops and deepfakes.
In the US at least, political attack ads using altered media are already rampant.
It’s difficult to find a generous interpretation of someone who thinks we should preemptively shut down any discussion of the potential and current harms of AI.