Someone recently linked to a weird competitive debate culture, where the rules of the format apparently caused people to start always talking about nuclear war, for reasons I don't fully understand but that probably have something to do with:
I immediately thought of the effective altruism movement and its fixation on existential threats, and wondered how much overlap there is. It feels like with any system of rules, there comes a point when you're exceeding the original intent of the rules, and you can choose to either double down on that or expand the rules.
So, is the most pressing world problem actually a meta-question, like: why aren't the world's most pressing problems being solved automatically? At what point does coming up with that better (even if imperfect) system beat out better and better answers to the wrong question?
I've never understood the "think of the future generations" reasoning that places AI risk at the top of EA's concerns.
If we can solve current problems, then future generations will be much better placed than we are to solve their problems, because technological progress exists and they will have more resources than we do.
If we can't solve our problems, why would we do any better on other people's?
Personal opinion: the most pressing world problems are energy poverty and the blighting of people's ability to make choices for themselves.[1]
https://en.wikipedia.org/wiki/Impact_calculus