What difference explains why this doesn't seem to happen when a human fills the CEO role, but would happen if the role were filled by automation?
I'm not sure you can get around the principal-agent problem that easily. Who sets the policy levers on the automation and governs it? Whoever does inherits the CEO's negotiating leverage with shareholders.
It seems like you'd need some fairly radical control structure to get around this (say, no board, just the AI interacting directly with shareholders). But even that ignores that the automation is not neutral; it is provided by actors with their own incentives.