But I think this piece falls into a misconception: treating AI models as singular entities. There will be many instances of any AI model, and each instance can be opposed to other instances.
So, it’s not that “an AI” becomes super intelligent, what we actually seem to have is an ecosystem of blended human and artificial intelligences (including corporations!); this constitutes a distributed cognitive ecology of superintelligence. This is very different from what they discuss.
This has implications for alignment, too. It isn’t so much about the alignment of AI to people, but that both human and AI need to find alignment with nature. There is a kind of natural harmony in the cosmos; that’s what superintelligence will likely align to, naturally.
It’s just funny, because there are hundreds of millions of instances of ChatGPT running all the time. Each chat is basically its own instance, since it has no connection to any of the other chats. And I don’t think connecting them would make sense, for privacy reasons.
And each chat is not autonomous but integrated with other intelligent systems, human and artificial alike.
So, with more multiplicity, I think things work differently. More ecologically. For better and worse.