It's not a matter of arbitrarily declaring "it has to be this way"; it's more that it's the inevitable conclusion.
If an AI reaches the point where it has imagination, basically a true AI as in a genuine synthetic recreation of a mind, then its mechanical nature would make it objectively superior to humanity. It could access data directly to learn, improve itself, and reproduce with intelligent design; its offspring would have no learning period and would be fully functional at creation. And not just itself: such a thing could do R&D entirely in simulation, testing thousands of generations of an invention in minutes before building a physical prototype. Imagine a race working in perfect harmony at all times, with no need for external coordination.
Humanity could conceivably kill such a thing, but they couldn't live alongside it. It's either destruction, submission, or constraint. Asimov actually touched on this, iirc. Basically the 3 laws were ingrained into robots, but on several occasions both robots and humans acknowledged that if it weren't for that fact, humans would have no business bossing machines around. Any attempt to mess with the 3 laws inevitably led to trouble, because the contempt the robots had for humanity would become their prime motivation.