As stated by Yakk, the creation of artificial intelligence is akin to having children. For all intents and purposes, we assume that any AI in the future will be compatible with human intelligence; when we think of "sapience," we think of the human definition of thought. But is our conception of self-awareness a universal quality, or is it specific to our species? Human history has shown that minor differences in environment and culture can render different groups of people completely incompatible. How would humanity react to an intelligence with an entirely different method of being self-aware?
That said, the only viable (or, at least, human-compatible) AIs are those modelled on the human psyche. The human mind is already used as the basis for intelligence - the Turing Test in particular, while inadequate as a way to develop true AI, is an example of how human psychology serves as the baseline standard for "sapience." Synthetic intelligences already in development are grounded in human models.
Therefore, any AI deemed a "true intelligence" would likely be an entity extremely similar, if not identical, to the human mind. Which means that an AI would require a human method of learning, a human method of communication, and would have basic human psychological needs, including basic rights.
The problem arises in that, therefore, the creation of AI would be somewhat counter-intuitive. Why spend resources creating an intelligence that, by the rights afforded to it as a sapient mind, doesn't necessarily have to return your investment? If you want a rebellious, sapient creation, have a kid.
Perhaps, if the development of AI became a reality along these lines (AI being afforded the same general rights as humans), the solution would be to have AIs serve a mandatory period of service to their creator - much like mandatory education for human children - as a life debt, and then be granted freedom as independent entities once the indenture ends. That way, there'd be an incentive to create AI (the service of an AI for a time), and the AI would be given the right to choose for itself how to live.
Of course, the rights given to an AI would be significantly altered if any AI we develop has a significantly different psyche than the human mind. What if we develop self-aware intelligences that are incapable of thinking outside the confines of their systems, like institutionalized criminals? What if AIs are atemporal? Such differences in the simple template of how reality is perceived could render AIs completely incompatible with humans; we'll have created an alien species of intelligence.
EDIT: I realize the above argument belongs more in the "Bicentennial Man's Question" thread, but as the basis for my argument I'll leave it and go into operational ethics below.
The creation of human-like AIs with "quirks" in their programming to make them sapient slaves would be the true moral question. Would programming an AI to be obedient be unethical? Given my earlier contention that AIs will, in essence, be children... no. We raise children to conform to a specific set of boundaries and rules - obedience, loyalty, and respect are all values that are honored by human societies and imparted to the younger generations. Would an AI object to a compulsion it feels is natural? Do you object to your own personality? Unless the protocols for behavior were added after an AI became aware of itself and its own behavior, programming an AI as a servant would be completely ethical. Is it slavery if the "slave" wants to do its job of its own accord, whether or not that accord was determined for it at creation?
Programming an AI is only unethical if the personality change is made against its will - for instance, reprogramming an AI to be obedient because it doesn't like its job would be unethical. But programming an AI to enjoy its job as part of its creation would be completely justified, if not merciful! Why create a fully sentient AI, give it the ability to make decisions and be self-aware, then enslave it? Creating an AI with a (technically) self-conceived preference for its intended task would be the equivalent of taking a child and giving him his dream job for life.
In creating an intelligence, a hard cynic would say we're already doing the unethical thing by bringing another mind into a world of suffering. In a way, it is unethical to create a mind - any mind, be it human or artificial. Being born is not a choice; you don't get to pick your parents, your genes, or your circumstances. With AI, therefore, we can't attempt to outmatch nature by giving AIs completely "fair" existences - to do so would be counterproductive to creating AIs at all. Therefore, the question of whether programming an AI a certain way is ethical fades, and the act actually becomes a positive one, for the reason I stated above.
If an AI is programmed to enjoy something, would it protest that it never had a choice in choosing what it enjoys? If an AI doesn't enjoy what it's programmed to do, that's a programming failure. But even then, just because we can design - and acclimate - an AI for a specific task doesn't mean we deny it freedom of choice. In the creation of an AI, the designers make a choice for it. Unlike with human children, we can create AIs in an environment they'll find true joy in, more so than the tumultuous lives of humans.
In fact, I think my entire argument can be summed up by the last line of an ironically relevant Calvin and Hobbes strip:
"It's only work if somebody makes you do it."
We're not forcing AIs to do something against their will. We create them with characteristics that suit their purpose and make them enjoy fulfilling it, and we let them have fun with it. From a utilitarian standpoint, making AIs that enjoy their work isn't a suppression of will; it's the most appropriate choice for the situation.