> pilots assume when they fly a plane. Same goes for operating AI; the onus of not using it to kill everyone falls on everyone, not one person.
That’s why we don’t let a random Joe fly a 747; there is extensive training, licensing, etc.
Do you envision the same for operating AI? In the real world you can’t even drive a moped without a licence, registration, and insurance. The same goes for access to dangerous chemicals. If AI is dangerous, this is the logical conclusion.
I envision that the existence of Air Traffic Control won't inherently stop people from using controlled airspace for hostile purposes. We can idealize what conduct looks like, but failures of protocol still happen, whether deliberately or by mistake.
The same is going to happen with AI. There will be bad actors, and trying to stop them from using AI for whatever "hostile" purposes it might enable is going to be nigh-impossible.