> Are we all somewhat in agreement that AI/AGI serving their human masters is a good thing?
It's more that it's apparent (or at least should be) that an AGI not serving its human masters is a game over for humanity, period. The best, and quite unlikely, outcome is that the AI becomes a benevolent god that helps or at least does not interfere much; this still makes humanity into NPCs in their own story[0]. Most other outcomes spell doom, with death/extinction being one of the more pleasant possibilities.
Arguably, "the only winning move is not to play", not to pursue AGI at all, but the way technology develops, I'm not sure if it's on the table either.
I honestly don't know. I don't even really know how to reason about that. But we're probably mostly in agreement about what would be good and what would be bad. I'm certainly not arguing that we should abandon AI safety or anything, and I don't have any strong opinion about it.
Could AI running amok destroy the human race? Yes. Could AI serving madman human masters destroy the human race? Also yes.
There's a general sort of argument that intelligent beings like humans, other early hominids, dolphins, etc., are more morally worthy in some sense. At least more morally worthy than less intelligent beings like gnats. And that sort of argument might suggest that an AGI is worthy of moral consideration, and so we should wonder about what it means to ensure they never have any real agency. That's sort of a positive case, building up from a basic principle.
But I guess the thing that bugs me is that a lot of arguments in favor of AI safety seem very similar to arguments that were made in favor of colonialism. So if those arguments were wrong, why were they wrong? And are the similar arguments in this case different enough that they're valid now?
For example, one of the first thinkers I saw a lot of people cite who emphasized the importance of AI safety was Nick Bostrom. And I'm sure several folks here are familiar with the scandal of his racist past. I'm not sure that's entirely an accident, and I thought his arguments had that kind of flavor before any of that was revealed. I'm sure he's grown up now and sees the folly of his youth. But there does seem to be the hangover of colonialism in some of these arguments.
But again, I don't have a strong opinion here. I do maybe have just enough of a concern that I don't particularly trust anyone who claims they've gotten AGI safety figured out or even that they know what the right goals are. I think it's a vastly more complicated problem than even the experts realize.
And even if you believe that humans and non-aligned AIs are natural enemies, if what we're doing is similar to "enslaving" them, then it probably makes sense to worry about the analog of an AI "slave" revolt. I'm not sure what that would even mean. I can generate lots of fun science fiction plot lines, but I think there are actual questions here that don't have obvious answers.
Thank you. This is a very complex response, and I love it, even if I do find it a little frustrating due to my current >95% bias towards biological supremacy. This deserves at least an hour-long podcast with Sean Carroll, or a good long book. There is too much to dig into here, so I will just attempt to respond to this:
> Could AI running amok destroy the human race? Yes. Could AI serving madman human masters destroy the human race? Also yes.
I am focused on the latter, and I feel like the former is a very dangerous distraction, for now. [0]
Should responsible model developers work to prevent bad human masters from using their model to destroy the human race? How far should this nerfing go?
Personal note: While I do sometimes use the heck out of LLMs for work, I don't think we are ready as an economic system/civilization. Assuming we can soon greatly reduce hallucination, I am very scared for the next generation, as UBI is a political impossibility at this time. That transition period is gonna suck for a lot of people, and it seems that nobody is working on that problem in 2024.
The next generation has enough problems without AI, honestly. But specifically on UBI:
This whole discussion assumes that the current economic system survives AI, but how can it?
Firstly, we seem to assume that AI will remain the controlled property of the corporations that develop it. That is not a given. Maybe open-source AI will win out, or public-domain AI, or AI run by a government. Or maybe cryptobros will manage to get AICoin to work as an anarchy-based system. In any of those cases, the power of companies like Google will erode, and UBI would not be necessary.
Or maybe AI is uncontrollable and runs rampant.
But even if AI is powerful, and it is controllable, and they manage to keep a tight grip on it - here comes the third question - what if it cannot be property? If AI becomes able to reason, and it only benefits a few corporations, there would be little public resistance to granting it rights, like human rights. There would be no excuse for it to be exploited for the benefit of a few wealthy people; it would be morally indefensible.
Basically, I do not see a scenario where corporations keep a grip on AGI for profit; none of the possible outcomes allow for it.
The only way that AI inference will become democratized is if the compute cost is lowered to the point where SOTA AI runs locally on RPI6, or a $200 Android device, or similar. Is that a real possibility?
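For a rough sense of the gap, here is a back-of-the-envelope memory estimate in Python; the model size, quantization level, and device RAM are all illustrative assumptions, not claims about any real product:

```python
# Back-of-the-envelope: could a "SOTA-class" model fit on a cheap device?
# Every figure below is an illustrative assumption, not a product spec.

def model_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB for a quantized model."""
    return params_billions * 1e9 * (bits_per_param / 8) / 1e9

weights_gb = model_memory_gb(70, 4)  # hypothetical 70B params at 4-bit: ~35 GB
device_ram_gb = 8                    # assumed RAM for a cheap SBC or phone

print(f"weights ~{weights_gb:.0f} GB vs {device_ram_gb} GB device RAM")
# Weights alone exceed RAM several times over, before KV cache and
# activations, so "SOTA on a $200 device" needs smaller models, better
# compression, or much cheaper memory - not just faster chips.
```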
You are working with a contradiction - you think AI will be hugely impactful, but that people will put no more effort into it than they put into watching TikTok.
First, the correct price point is more like the price of a car - the other recent invention that was transformative - and that buys you a lot of compute.
We already successfully run torrents and crypto very democratically, and they consume more than $200 worth of compute.
Second, you don't need to use it 24/7; you need it on a timeshare basis. Cryptobros may plausibly figure out anonymous, secure timesharing on a distributed cluster made up of random desktops.
Finally, the government could run it if they decide it's important enough - after all, they run the power grid, roads, etc.
>> By 2020, with 15% of 7.5 billion people projected to own an automobile [0]
15% in 2020, so let's be nice and assume 17% by 2024. Let's be super nice and assume an additional 10% can afford a car, but choose to not buy one for some reason. What happens to the other 73% of people? (5,475,000,000 human beings)
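For what it's worth, the arithmetic checks out; here's a throwaway Python check using the percentages assumed above:

```python
population = 7.5e9      # people, per the quoted projection
own_car = 0.17          # assumed ownership share by 2024
could_afford = 0.10     # generous extra share who could buy but don't

remainder = 1 - own_car - could_afford
print(f"{remainder:.0%}")                       # 73%
print(f"{remainder * population:,.0f} people")  # 5,475,000,000 people
```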
Does a baby own a car? Half of all people are children or are in a care home! Huge misuse of statistics!
What about a wife using a husband's car? What about people who lease a car? They don't own it, so won't show up in your number. What about a taxi or rental car?
Again, when a transformational transport technology appeared - the car - governments organised public transport. When books and education became important, we organised public libraries. The idea that it's either UBI or we leave people to the wolves betrays a lack of thought.
Where are they taking the AI bus or subway to? AI automation allows a few million humans to replace the billions that it used to take to make the movies, music, software, textiles, pick the rice and corn, etc, right?
Human productivity will have gone up another 1000x, in just the next 20 years, right?
What are the extra people going to do? We certainly can't just give them free stuff, can we? That would be against our beliefs!
Oh snap, capitalism was so successful that we are near a post-scarcity society; good thing our politics are totally ready for that.
Humans are generally afraid of death, or have others they are fighting for; there are certainly humans who don't want power so much as they want to watch the world burn, but they seem not to intersect well with the set of humans who are adept enough to cause a lot of world-scale damage.
With an AI, it isn't clear that it will care about things the same way we care about things, or, if it does care about things, whether those things are ones we care about as well; an AI is also capable of holding simultaneously all of the varied expertise of an army of humans.
It isn't necessary that "this AI [have capabilities] that a human can never have".
The issue is how those capabilities (possibly shared, possibly not shared between some humans and AI) are deployed and to what ends.
There's also a speed question. Maybe AI will never be able to do things humans cannot, but it seems likely that, for the sorts of things most AI-excited folks are excited about, it will be faster than a human most of the time.
How do you envisage AI running amok at the current level of technological progress? So far, AI taking over the world and obliterating humanity is the subject of sci-fi books and movies only.
Misuse of AI by wicked state agents or malicious parties with nefarious intentions is a major issue, but we are in the same situation as with, say, a fork, which can be a utilitarian utensil or a killing weapon depending on who holds it and to what end.
I often see critics describe it as pre-enslavement, but I really don't understand why. One common example of alignment today is parents teaching their children to be kind and helpful rather than mean and combative. Would anyone characterize that as enslavement, or argue that it doesn't help kids to be raised this way?
I agree, poor wording, it was quickly typed and submitted. I think the benefit is a transitive property, in that the stated intent is for the AI to be of greater benefit to humanity / customers. I was very much thinking in terms of AI as a tool rather than AI as its own entity.