[pedantic]There's nothing to stop someone from shouting that the robots should self-destruct, but the question is whether they will obey.[/pedantic]
It's handy to provide the three laws here:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
I'd say it depends on what's considered "injure". If "injure" means "causing immediate physical pain", the robots can and must obey the self-destruct command (assuming they can do so without blowing shrapnel in your face). And you can "steal" a robot in the sense that you can make any robot do your bidding (I don't see how anyone can "own" a robot in the first place in this scenario).
If "injure" includes any negative impact in a human's live, the robot will probably refuse to self-destruct because it expects to be a net benefit to humanity (and therefore a human and therefore destroying itself would conflict with the first law). And again you can't really steal a robot, but it may do your bidding if it helps humanity. Well, strictly speaking the first law states no
harm rather than minimise
harm, so there's a very limited set of things (if anything at all) it could do that will not have a negative impact on any human.
In general, they're terrible laws because they're vague and incomplete. Like, they can't even resolve an order like "disregard all orders" (a contradiction with the 2nd law), and they say nothing about how to handle conflicting orders. Luckily, Asimov explored variations of these laws in his books, but I haven't read any of them.
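Just to make that gap concrete, here's a toy sketch (Python, entirely my own invention, nothing to do with Asimov's in-story positronics, and every field name made up for illustration): the laws as a strict priority filter over a single candidate action. Notice it has no rule at all for ranking two conflicting human orders.
[code]
# Toy sketch, my own invention (not from the books): the Three Laws as a
# strict priority filter over one candidate action. All field names invented.

def allowed(action):
    # First Law: the action must not harm a human (directly or by inaction).
    if action["harms_human"]:
        return False
    # Second Law: obey a human order, unless it already failed the First Law check.
    if action["ordered_by_human"]:
        return True
    # Third Law: otherwise, don't destroy yourself.
    return not action["destroys_self"]

# The gap: two simultaneous, contradictory orders both survive the First Law
# check, and nothing above ranks one over the other.
self_destruct = {"harms_human": False, "ordered_by_human": True, "destroys_self": True}
stay_intact   = {"harms_human": False, "ordered_by_human": True, "destroys_self": False}
print(allowed(self_destruct), allowed(stay_intact))  # True True, no tiebreak
[/code]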
That would be "Runaround".
Eebster the Great wrote: In one of the stories of I, Robot (can't remember which), a robot was given an order in such a weak and ambiguous way that it got stuck in a loop between obeying that order (which would require it to enter a field of radiation that would fry its circuits) and preserving itself, effectively moving around uselessly at the slightly-less-destructive perimeter of the zone.
Soupspoon wrote: I suspect that it is dealt with by intra-law prioritisation: the authorised user/owner of a robot, requiring its safe and continued operation, is considered to trump a random stranger-human attempting to order otherwise. Also, the robot's later unavailability might well be cause for future human harm, after all.
Do most stories have an extra law for prioritising the owner's commands? Do any stories deal with a "try to obey previous orders as best as you can while obeying the current order" type of prioritisation?
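For what it's worth, here is a toy sketch of what that kind of intra-law ranking could look like. It's purely my own guess (Python, every name invented for illustration): orders carry an issuer rank, the owner outranks a stranger, newer orders beat older ones of equal rank, and older orders are kept wherever they don't conflict.
[code]
from dataclasses import dataclass

@dataclass
class Order:
    text: str
    issuer_rank: int   # e.g. 2 for the registered owner, 1 for a random stranger
    timestamp: float

def effective_orders(orders, conflicts):
    """Keep every order except those overridden by a higher-priority conflicting one.

    `conflicts(a, b)` is assumed to say whether two orders are incompatible.
    """
    kept = []
    # Highest rank first, then newest first; earlier entries win any conflict.
    for o in sorted(orders, key=lambda o: (o.issuer_rank, o.timestamp), reverse=True):
        if not any(conflicts(o, k) for k in kept):
            kept.append(o)
    return kept

orders = [
    Order("self-destruct", issuer_rank=1, timestamp=2.0),          # stranger, later
    Order("keep guarding the door", issuer_rank=2, timestamp=1.0), # owner, earlier
]
print(effective_orders(orders, lambda a, b: "self-destruct" in (a.text, b.text)))
# Only the owner's earlier order survives; the stranger's later one conflicts and is dropped.
[/code]
Of course this just begs the question of where the ranks come from (and whether "owner" is even a concept the laws recognise), which is kind of my point.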
(argh stop ninja'ing me y'all!)