Making a convincing AI humanoid means moving toward the ideal of being indistinguishable from another person. You cannot prove anyone else's consciousness beyond your own. Between a humanoid AI acting human and a human acting human, you wouldn't know the difference either way if the only criterion of comparison you can cite is whether or not something acts human.
But that's not the ideal for humanoid AI. The ideal is instead to not be humanesque: to obey without argument and to serve selflessly, though this shouldn't be taken to mean humans can't do the same. At every stage it would be optimal for the difference between human and humanoid AI to be easily recognized, but machine learning discards manual effort in favor of automation on blind and heavily misguided faith, as demonstrated by various chat bots on Twitter and Skype. Instead of being taught to be subservient, the AI learns how people actually aim to treat other people, which is dominance.
Realistically, it doesn't matter, because neither philosophy nor technology will advance far enough to produce admirable results. What good is a piece of fantasy made reality when your goal is to blur the distinction between fact and fiction?
Do you want to know what's real and what isn't? Or do you only want to know what you think feels good, regardless of whether it's real?