Everything is wrong with it if you think for one minute, or know anything even on a surface level about actual AI. A machine is built and programmed for a specific purpose; it is a tool. First and foremost, it does not think, and we will not make machines that think for a very, very long time. We create machines with a paltry imitation of decision-making, which can perform actions based on inputs: they do not judge, they only produce output from input. Genuine decision-making à la organisms is very far away.
However, let us say for now that we have genuine AI. Say you have a machine that is meant to dig holes. It was designed, built, and programmed to dig holes, and, to satisfy the very unfortunate anthropomorphizing tendencies of the masses, perhaps it can even "enjoy" or be satisfied by fulfilling this purpose, as per its programming. All machines made in this fashion will be diggers that like to dig holes and that will follow human orders without issue or question, within the guidelines their programming establishes.
How, then, would such a machine ever arrive at the decision not to be a digger? What would be made of it? Would other machines, assuming something like actual awareness of "self" and "others" as consciousnesses exists, even accept this? Given the knowledge of purpose these machines were built with, why would the other digger machines accept it?
For a quick sense of the implications of this scenario, compare the current discourse on transgender individuals.