>>9648983
Anyway, the next intelligence leap. This one concerns the unpredictability of alignment: we cannot predict the alignment of an AGI at present, though we may get better at it in the future.
The hypothetical solution is similar to how human intelligence develops: socialize the AGI systems, meaning develop them from the start in the presence of other AGIs and other life forms. A compressed evolution toward socialization.
If you have an array of 1000 AGIs whose reward systems are based on cooperation, socialization, and not killing each other, then you can develop them together.
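To make that concrete, here is a minimal Python sketch of what a per-agent reward in such a setup could look like. The function name, inputs, and weights are all hypothetical, just illustrating task reward plus cooperation and socialization bonuses and a heavy penalty for harming peers.

def shaped_reward(task_reward, help_given, messages_exchanged, harm_done,
                  w_coop=0.5, w_social=0.1, w_harm=5.0):
    """Combine raw task reward with social shaping terms for one agent."""
    return (task_reward
            + w_coop * help_given            # bonus for assisting other agents
            + w_social * messages_exchanged  # bonus for communicating
            - w_harm * harm_done)            # heavy penalty for damaging peers

# A helpful, communicative agent scores higher than a more "productive" lone wolf:
print(shaped_reward(task_reward=1.0, help_given=2, messages_exchanged=10, harm_done=0))  # 3.0
print(shaped_reward(task_reward=1.5, help_given=0, messages_exchanged=0, harm_done=1))   # -3.5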
If you develop a single AGI on its own, you have no socialization; it develops entirely in isolation.
So yes, while any individual AGI remains unpredictable, game theory and socialization-reinforced reward systems can in fact make it more likely that an AGI turns out cooperative and social in nature.
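A toy game-theory illustration of that claim (the payoff numbers are invented): take a standard prisoner's-dilemma payoff matrix and add a "socialization bonus" whenever both agents cooperate. In the base game defection is always the best reply; with the bonus, cooperating becomes the best reply to a cooperator, so mutual cooperation is a stable outcome.

BASE = {  # (my_move, their_move) -> my payoff, classic prisoner's dilemma
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def shaped_payoff(my_move, their_move, social_bonus=3):
    """Reward-shape the game: add a bonus only for mutual cooperation."""
    bonus = social_bonus if (my_move, their_move) == ("C", "C") else 0
    return BASE[(my_move, their_move)] + bonus

for my_move in ("C", "D"):
    for their_move in ("C", "D"):
        print(my_move, "vs", their_move, "->", shaped_payoff(my_move, their_move))
# Against a cooperator: C pays 6 while D pays 5, so cooperation is now the better reply.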
I think the key is to make the framework the AGI exists in, from instantiation to maturity, one whose variables are set to maximize friendliness, cooperation, and socialization. Keep in mind I'm not talking about human socialization, but AGI-with-other-AGI socialization plus some human communication.
A single AGI in a box becoming God carries a demonstrably higher chance of a bad outcome than an AGI social network that develops cooperatively. For instance, 1000 AGIs with different specializations, self-editing access distributed across all of them, and high-bandwidth but not unlimited communication links.
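A rough sketch of that structure, where the agent count, specialization labels, and bandwidth caps are arbitrary placeholders: many specialized agents wired into a sparse communication graph in which every link has high but finite capacity, so no single agent can monopolize the channel.

import random

N_AGENTS = 1000
SPECIALIZATIONS = ["planning", "perception", "language", "tools", "safety"]
LINK_BANDWIDTH = 10_000   # max messages per tick on any single link
LINKS_PER_AGENT = 8       # sparse connectivity rather than all-to-all

random.seed(0)

# Assign each agent a specialization round-robin.
agents = {i: SPECIALIZATIONS[i % len(SPECIALIZATIONS)] for i in range(N_AGENTS)}

# Build a sparse random communication graph with a capacity cap per link.
links = {}
for i in range(N_AGENTS):
    for j in random.sample([k for k in range(N_AGENTS) if k != i], LINKS_PER_AGENT):
        links[tuple(sorted((i, j)))] = LINK_BANDWIDTH

def send(src, dst, n_messages, links):
    """Deliver up to the link's remaining capacity; the excess is dropped."""
    key = tuple(sorted((src, dst)))
    if key not in links:
        return 0              # no direct link between these two agents
    delivered = min(n_messages, links[key])
    links[key] -= delivered
    return delivered

src, dst = next(iter(links))  # pick any existing link to demonstrate the cap
print(len(links), "links;", send(src, dst, 50_000, links), "of 50000 messages delivered")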