I really like your articulation of this. It has struck me for a while now that the people working on friendly AI could have more impact if they invested their energy and intellectual prowess in the real threats humanity currently faces, rather than in an amorphous future hypothetical we are ill prepared to understand.
I also like the alternative problem that you posit: what can we do about existing semi-autonomous systems whose functioning is harmful to humanity? However, I don't feel very hopeful about any solutions we might find to these problems. Semi-autonomous systems at the societal level have existed for probably nearly as long as humanity itself. Are there some good examples of such systems that were initially harmful but have been carefully cultivated by humanity to make their effects largely beneficial?