Wilken wrote: I'm not getting why scenario 6 is not the Frustrating world as well. It's the Mars scenario all over again.
That scenario seems more likely to result in robotic overlords who don't care much for humans but leave them alone as long as they don't threaten the robots.
Wooloomooloo wrote: For me, the fundamental problem with Asimov's laws - as awesome as they are - is that they always, ultimately and necessarily, require a subjective interpretation by whoever attempts to obey them. A rule like "if there is an obstacle less than 1 m in front of you, avoid it" is unequivocal: with appropriate sensors, a factual determination can be made of whether it applies. "This or that human is about to be harmed and you must not remain inactive but must prevent it" is such a fuzzy concept as to be unworkable, unless someone is obviously about to be hit by a bus right in front of you. None of the laws can be judged objectively; it is entirely up to the judgement of the one making the call when they apply, when they don't, and what an appropriate action might be. Once you're not in immediate physical danger, what exactly would it take to preserve yourself in the best way? What would constitute a threat, if it's not about immediate annihilation? So yeah... nice try, but...
And that is why the three laws provided enough material for several books. Asimov found that all the robot stories he read were slight variations on Frankenstein: Someone built a robot, and the robot ran amok and destroyed its creator. Every time. Asimov found that preposterous. Of course we'd put safety mechanisms in the robots. The safety mechanisms would not be perfect, but they'd be as good as we could make them. So Asimov designed a set of safety mechanisms, and then explored all the different ways they might fail that he could think of.
peregrine_crow wrote: My main problem with the three laws is that they are written in English and (because of that) assume that we have a complete definition of a whole bunch of concepts that are extremely ambiguous even in the best of cases. To implement the first law, you would have to unambiguously define (at the very least) "human" and "harm", which means outright solving a large part of philosophy.
As Quey noted, the laws weren't written in English. They were hardwired in the design of the positronic brain. Nonetheless they were open to interpretation, and some of Asimov's stories do explore the problems of defining "human" and "harm".
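Just to make the contrast Wooloomooloo and peregrine_crow are pointing at concrete, here's a rough sketch in Python (all names are made up for illustration, not any real robot API): the "obstacle within 1 m" rule reduces to a single numeric comparison over sensor data, while the First Law bottoms out in predicates like "is this a human" and "would it come to harm" that nobody knows how to specify.

    from dataclasses import dataclass


    @dataclass
    class Situation:
        distance_to_obstacle_m: float  # range from a forward-facing sensor
        entity_nearby: object          # whatever the robot is currently looking at


    def must_avoid(s: Situation, threshold_m: float = 1.0) -> bool:
        # The "obstacle within 1 m" rule: one numeric comparison,
        # decidable directly from sensor data.
        return s.distance_to_obstacle_m < threshold_m


    def is_human(entity) -> bool:
        # Placeholder: pinning down "human" is exactly the unsolved part.
        raise NotImplementedError("no complete definition of 'human'")


    def would_come_to_harm(entity, if_robot_inactive: bool) -> bool:
        # Placeholder: defining and predicting "harm" is just as open-ended.
        raise NotImplementedError("no complete definition of 'harm'")


    def first_law_requires_action(s: Situation) -> bool:
        # The First Law only becomes checkable once the two predicates
        # above are filled in, and nobody knows how to fill them in.
        return is_human(s.entity_nearby) and would_come_to_harm(
            s.entity_nearby, if_robot_inactive=True
        )

The first function is the kind of rule Wooloomooloo calls unequivocal; the second one only looks implementable because the hard parts are stubbed out, which is pretty much where Asimov's stories find their material.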