In a dictatorship with an AI in control, accepting the consequences isn't even a question, at the very least.
There is no such thing as an objectively best-case scenario, so it always comes down to what goals the AI has, whether it's given them or arrives at them on its own.
That would require the humans controlling the experiment both to be willing to input altruistic goals AND to accept the consequences of getting there.
We can’t even surrender a drop of individualism and accept that trains are the way we should travel non-trivial distances.