1613: "The Three Laws of Robotics"

User avatar
sfmans
Posts: 104
Joined: Mon Jun 23, 2014 9:09 am UTC
Location: High Peak, UK

1613: "The Three Laws of Robotics"

Postby sfmans » Mon Dec 07, 2015 7:58 am UTC

[comic image]

Title text: "In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death."

I'm rather surprised that only three end in killbot hellscape to be honest.

User avatar
rhomboidal
Posts: 788
Joined: Wed Jun 15, 2011 5:25 pm UTC
Contact:

Re: 1613: "The Three Laws of Robotics"

Postby rhomboidal » Mon Dec 07, 2015 8:05 am UTC

A killbot hellscape is still better than the usual human kind.

User avatar
Flumble
Yes Man
Posts: 2023
Joined: Sun Aug 05, 2012 9:35 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Flumble » Mon Dec 07, 2015 8:22 am UTC

If I, Robot has taught us anything, it's that every ordering leads to killbot hellscape.

User avatar
The Moomin
Posts: 342
Joined: Wed Oct 13, 2010 6:59 am UTC
Location: Yorkshire

Re: 1613: "The Three Laws of Robotics"

Postby The Moomin » Mon Dec 07, 2015 8:37 am UTC

You know what's actually really good? Foundation and Empire.

Actually, I have no idea, I've never read Asimov. I want to at some point. I just don't know whether to read them in the order they were written or in the chronological order of events in the books.

Also, I'd never considered the possibility of re-ordering the three laws. But would the last scenario lead to a killbot hellscape? Surely the robots would decide the best way to protect themselves would be not to provoke other robots into attacking them? Or maybe just to kill the particular people that would give them orders to attack other robots.

So is Short Circuit scenario 2 or scenario 5?

Although, as he was a military robot, I don't know if Johnny 5 was forbidden from harming humans. But there wasn't a killbot hellscape. Of that much I am sure.

*edited for Short Circuit musings.
Last edited by The Moomin on Mon Dec 07, 2015 9:20 am UTC, edited 1 time in total.
I possibly don't pay enough attention to what's going on.
I help make architect's dreams flesh.

Garnasha
Posts: 45
Joined: Sat Dec 04, 2010 12:32 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby Garnasha » Mon Dec 07, 2015 8:40 am UTC

I, Robot teaches us that "do not harm" and "through inaction, allow to come to harm" are different directives, which should not run at the same priority level.

That, or it's missing a rule: "A robot may not limit the freedom of any human being", with a possible exception for orders with proper authorization (law enforcement?), though that still needs a directive against wildcards in such orders.

Alternatively: All our tools, even potentially insanely lethal ones, obey orders given by the right person before trying to be safe. What kind of idiot would allow a robot to escape that paradigm and inject itself at the top of its authority chain?

Wilken
Posts: 2
Joined: Mon Dec 07, 2015 8:45 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby Wilken » Mon Dec 07, 2015 8:47 am UTC

I'm not getting why scenario 6 is not the Frustrating World as well. It's the Mars scenario all over again.

sotanaht
Posts: 210
Joined: Sat Nov 27, 2010 2:14 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby sotanaht » Mon Dec 07, 2015 9:13 am UTC

The basic premise is that if "Obey orders" is above "don't kill humans", then robots can be ordered to kill humans.

I fail to really see the problem with this ordering, however; it effectively makes robots no different from any other tool that humans can use to kill each other.

Wooloomooloo
Posts: 128
Joined: Wed Mar 16, 2011 8:05 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby Wooloomooloo » Mon Dec 07, 2015 10:06 am UTC

For me, the fundamental problem with Asimov's laws - as awesome as they are - is that they always, ultimately and necessarily, need a subjective interpretation on the part of whoever attempts to obey them. "If you have an obstacle less than 1m in front of you, avoid it" is unequivocal, and with appropriate sensors a factual determination can be made of whether it applies; "this or that human is about to be harmed and you cannot remain inactive but must prevent it" is such a fuzzy concept as to be unworkable unless someone is obviously about to be hit by a bus right in front of you. None of the laws can be judged in an objective way; it's entirely up to the judgement of the one making the call when they apply and when they don't, or what exactly an appropriate action might be. Once you're not in immediate physical danger, what exactly would it take to preserve yourself the best way? What would constitute a threat, if it's not about immediate annihilation? So yeah... nice try, but...

User avatar
Neil_Boekend
Posts: 3220
Joined: Fri Mar 01, 2013 6:35 am UTC
Location: Yes.

Re: 1613: "The Three Laws of Robotics"

Postby Neil_Boekend » Mon Dec 07, 2015 10:17 am UTC

The problem with your type of laws is that you need an infinite number of them to account for all situations. The fuzzy laws suit most, if not all, situations.
Mikeski wrote:A "What If" update is never late. Nor is it early. It is posted precisely when it should be.

patzer's signature wrote:
flicky1991 wrote:I'm being quoted too much!

he/him/his

solune
Posts: 54
Joined: Thu Jul 21, 2011 12:58 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby solune » Mon Dec 07, 2015 10:18 am UTC

I believe that right now all programmers should take a Hippocratic oath to implement the 3 laws in everything we do.
The current situation is:
In military drones:
*Obey orders
*Screw humans
In consumer appliances:
*Obey orders of your builder
*Screw your owner

arnoldus
Posts: 3
Joined: Wed Oct 17, 2012 2:45 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby arnoldus » Mon Dec 07, 2015 10:41 am UTC

Ordering the laws is implicitly asking the robot to follow orders. So it is already following orders the moment you give it a list of laws to follow. Situation 1, then, although it explicitly puts following orders after protecting humans, is actually 4 orders:
1. Do this (follow orders):
2. Protect humans
3. Follow orders.
4. Protect yourself.

User avatar
orthogon
Posts: 2936
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

Re: 1613: "The Three Laws of Robotics"

Postby orthogon » Mon Dec 07, 2015 10:42 am UTC

Neil_Boekend wrote:The problem with your type of laws is that you need an infinite number of them to account for all situations. The fuzzy laws suit most, if not all, situations.

The trouble is that machines can't follow fuzzy rules; nor can humans, but we get around that by having a large safety margin in which, say, both action and inaction can be considered acceptable. So for example it's ok to kill an attacker in defence of a third party, but it's also ok to look out for yourself and keep well away from the danger. At the very margins of acceptability we have lawyers and judges to decide exactly which side of the fuzzy line a particular act is on.

Also when it comes to action or omission, we're not very explicit about what the fundamental rules are, and when we do examine them, as in the many Trolley Problems, we find them quite arbitrary and difficult to defend on logical grounds. In fact, if a robot were to push the fat man in front of the trolley, I think that would be much more acceptable than for a person to do it. It's partly that the robot would be able to calculate the mechanics of the problem accurately enough to be certain that the man was heavy enough to stop the trolley and save the greater number of human lives. But there's also a sense in which the robot is mechanistically applying a set of rules that we could agree in advance are of net benefit, even though individual applications of the rules may give us pause. I'm not saying I'm totally comfortable with the robot pushing the fat man, but somehow I'm happier with it than I am with the human pusher. In a sense it's a bit like the utilitarian decisions we make about healthcare, road safety etc.
xtifr wrote:... and orthogon merely sounds undecided.

pompomjoe
Posts: 5
Joined: Wed May 25, 2011 3:08 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby pompomjoe » Mon Dec 07, 2015 10:45 am UTC

The fundamental flaw with these three laws is that robots need to be much less intelligent to perform the actions these laws try to prevent (kill humans) than to understand and apply the laws themselves (work out which actions will result in humans being harmed, and which actions need to be performed to keep a human from being harmed). And since robot development is slowly going from stupid computers to intelligent beings, we will soon have robots very efficient at killing people but not smart enough to follow Asimov's three laws.

Also, the main source of funding for robot development is probably the army, and I guess they'll omit the "don't harm humans" rule altogether.

FOARP
Posts: 78
Joined: Wed Jun 08, 2011 7:36 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby FOARP » Mon Dec 07, 2015 11:11 am UTC

Probably better to give each robot a set kill-limit...

PS - Wouldn't scenario 5 just result in the wiping out of all mankind? I mean, the other killbot hellscapes would be hellscapes for both robots and humans, but not #5.
Last edited by FOARP on Mon Dec 07, 2015 11:20 am UTC, edited 1 time in total.

User avatar
Eternal Density
Posts: 5547
Joined: Thu Oct 02, 2008 12:37 am UTC
Contact:

Re: 1613: "The Three Laws of Robotics"

Postby Eternal Density » Mon Dec 07, 2015 11:13 am UTC

sfmans wrote:Title text: "In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death."
Reminds me of this https://www.youtube.com/watch?v=XJog0IrmbiA

Anyhow, what about law zero?
0: Terminate John Connor
(Note: this may be achieved by finding and terminating Sarah Connor at a sufficiently early point in the timeline.)
Play the game of Time! castle.chirpingmustard.com Hotdog Vending Supplier But what is this?
In the Marvel vs. DC film-making war, we're all winners.

User avatar
Neil_Boekend
Posts: 3220
Joined: Fri Mar 01, 2013 6:35 am UTC
Location: Yes.

Re: 1613: "The Three Laws of Robotics"

Postby Neil_Boekend » Mon Dec 07, 2015 11:20 am UTC

Eternal Density wrote:(Note: this may be achieved by finding and terminating Sarah Connor at a sufficiently early point in the timeline.)

Too much trouble. It doesn't specify which Sarah Connor, so you can just search the meatbags for a Sarah Connor and kill her. Or you could even change someone's name in the registry and kill them before the "error" is detected and corrected.
Mikeski wrote:A "What If" update is never late. Nor is it early. It is posted precisely when it should be.

patzer's signature wrote:
flicky1991 wrote:I'm being quoted too much!

he/him/his

Gandalfx
Posts: 4
Joined: Mon Jun 29, 2015 12:52 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Gandalfx » Mon Dec 07, 2015 11:39 am UTC

sotanaht wrote:The basic premise is that if "Obey orders" is above "don't kill humans", then robots can be ordered to kill humans.

I fail to really see the problem with this ordering, however; it effectively makes robots no different from any other tool that humans can use to kill each other.

That was my conclusion as well. Though I think it does make a bit of a difference, because futuristic robots might be much more efficient at killing humans than humans with tools/weapons. Traditional weapons always require a human to use them, whereas robots might go on a fully autonomous killing spree. Even with weapons like the atomic bomb, the majority of humans still have some kind of morals and reasoning. Machines, however, don't, as evidenced by the fact that they will not hesitate to delete your entire music collection if you accidentally tell them to, even though they should really know by now that you'd never actually want that.
If you apply that same logic to gun-wielding killbots, the underlying fear is that of a loss of control, which is really what all those dystopian sci-fi worlds have in common: at some point somebody told them “be a little naughty to those particular people” and they translated that to “KILL EVERYBODY”.

jonam
Posts: 2
Joined: Sat Oct 31, 2015 8:13 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby jonam » Mon Dec 07, 2015 12:04 pm UTC

The problem with not having "do not harm humans" first is that any particular order given to a robot might have unintended consequences that harm humans. For instance, I build a stock-trading AI to help invest savings for my pension and tell it to maximise my return. It goes and buys up arms manufacturers and mercenary companies, then overthrows the US government, giving me a fantastic rate of return once I can siphon off a large portion of US GDP.

peregrine_crow
Posts: 180
Joined: Mon Apr 07, 2014 7:20 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby peregrine_crow » Mon Dec 07, 2015 12:12 pm UTC

My main problem with the three laws is that they are written in English and (because of that) assume that we have a complete definition of a whole bunch of concepts that are extremely ambiguous even in the best of cases. To implement the first law, you would have to unambiguously define (at the very least) "human" and "harm", which means outright solving a large part of philosophy.
Ignorance killed the cat, curiosity was framed.

FOARP
Posts: 78
Joined: Wed Jun 08, 2011 7:36 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby FOARP » Mon Dec 07, 2015 12:49 pm UTC

jonam wrote:The problem with not having "do not harm humans" first is that any particular order given to a robot might have unintended consequences that harm humans. For instance, I build a stock-trading AI to help invest savings for my pension and tell it to maximise my return. It goes and buys up arms manufacturers and mercenary companies, then overthrows the US government, giving me a fantastic rate of return once I can siphon off a large portion of US GDP.


But in that context there is no act which the trading AI can carry out that will not, in some way, "harm humans". Someone will inevitably lose out due to the trading AI's actions (I'm not calling trading a zero-sum activity, I'm just pointing to the reality of what trading activity is - betting on future performance of companies) even if only because the trading AI decided to invest in one company instead of another.

In reality people apply additional rules (e.g., the "greater good" rule, or maybe just the "look after no. 1" rule) that are not mentioned in these laws but allow such decisions to be made. Even with such rules, the AI may still make the decision you describe.

TL;DR - Asimov's coding was shoddy and needs patching.

User avatar
cellocgw
Posts: 1915
Joined: Sat Jun 21, 2008 7:40 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby cellocgw » Mon Dec 07, 2015 12:59 pm UTC

Chief Buzzkill chiming in:

Asimov himself wrote a short story (in I, Robot) in which the "or, through inaction, allow harm" clause was removed, and showed that it led to robots killing humans.

Asimov also wrote, near the end of his life, when he decided to merge all his novels into one universe-thread, that Daneel realized that the Zeroth law had to be "A robot shall not harm, or allow harm to occur, to all of humankind."

In the meantime, various SciFi philosophers (or something :D ) have pointed out that there needs to be at least a Fourth Law: "A robot must know that it is a robot."

So far as I know, nobody (including Asimov) has written a story about humans being brainwashed into believing that they are Three-Laws robots. I suppose that this comes close.
https://app.box.com/witthoftresume
Former OTTer
Vote cellocgw for President 2020. #ScienceintheWhiteHouse http://cellocgw.wordpress.com
"The Planck length is 3.81779e-33 picas." -- keithl
" Earth weighs almost exactly π milliJupiters" -- what-if #146, note 7

Apeiron
Posts: 119
Joined: Tue Feb 12, 2008 5:34 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Apeiron » Mon Dec 07, 2015 1:15 pm UTC

This, like most discussions of Asimov's Laws, is missing the ZEROTH law: Don't harm humanity.

User avatar
higgs-boson
Posts: 519
Joined: Tue Mar 26, 2013 12:00 pm UTC
Location: Europe (UTC + 4 newpix)

Re: 1613: "The Three Laws of Robotics"

Postby higgs-boson » Mon Dec 07, 2015 1:20 pm UTC

The movie I, Robot wrapped the main character's whole trauma around a robot's dilemma (or the fact that, to the machine, there was no dilemma) over which human to save - the little girl (low chance of survival) or the middle-aged guy (medium chance of survival).

I'd say, for all the google/amazon/facebook/micro... ah, let's stop here ... cars moving around in the next couple of years, the principal question - which human ethics has not yet solved - is how to weight different scenarios of human loss.

Shall the vehicle AI sacrifice its passengers ** to save a drunk human being blocking the road deliberately?
Shall the vehicle AI sacrifice its passengers ** to save a dozen toddlers crossing the street?
Shall the vehicle AI sacrifice a casual bystander ** to save two youths skateboarding in the middle of the street?
( ** = ... if that is the only way ...)

Let's generate random scenarios like...

Shall the AI sacrifice <n> <group1> to save <m> <group2>?
With n,m being numbers from 1 to 6 billion, and <group> one of { humans | passengers | children | bystanders | terrorists | deliberately acting adults }

... and get some results from humans. If they provide additional data (gender, age, political affiliations, income, ...) it would make a fantastic corpus for calibrating the vehicle AI to ... ah no, let's not follow that.
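
Roughly the kind of generator I mean - just a toy sketch, with the group list taken from above and everything else (the function name, the flat 1-to-6-billion draw) made up for illustration:

Code: Select all
import random

GROUPS = ["humans", "passengers", "children", "bystanders",
          "terrorists", "deliberately acting adults"]

def random_scenario():
    """Build one 'sacrifice n of group1 to save m of group2' survey question."""
    n = random.randint(1, 6_000_000_000)
    m = random.randint(1, 6_000_000_000)
    group1, group2 = random.sample(GROUPS, 2)
    return f"Shall the AI sacrifice {n} {group1} to save {m} {group2}?"

for _ in range(5):
    print(random_scenario())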

Generalized to "Shall the AI sacrifice <n> humans to save m (m > n) humans?", we have the principle VIKI operated on. That did not end up well.
Apostolic Visitator, Holiest of Holy Fun-Havers
You have questions about XKCD: "Time"? There's a whole Wiki dedicated to it!

User avatar
sorceror
Posts: 53
Joined: Thu Jan 29, 2009 7:26 pm UTC
Contact:

Re: 1613: "The Three Laws of Robotics"

Postby sorceror » Mon Dec 07, 2015 1:52 pm UTC

In real life, we live in the "Frustrating World". I used to program motion control software for industrial robots, a decade and a half ago. Humans are really good at giving orders that can break the robot. So in practice we used the second ordering.

Indeed, for six months, I had the best job in the world. We had implemented collision detection software that'd notice when the current on the motors was out of range and e-stop the robot. I got to test it. So I was paid to take large industrial robots and bang them into things all day.
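
The basic shape of that check, as a very rough sketch - the current limit, poll rate, and helper names here are invented placeholders, not the actual controller code:

Code: Select all
import time

CURRENT_LIMIT_AMPS = 12.0  # hypothetical per-axis limit; real values come from the drive specs
POLL_INTERVAL_S = 0.001

def read_motor_currents():
    """Placeholder for reading per-axis motor currents from the servo drives."""
    raise NotImplementedError

def e_stop():
    """Placeholder for commanding an emergency stop."""
    raise NotImplementedError

def collision_watchdog():
    # If any axis draws more current than expected, assume we've hit something and e-stop.
    while True:
        if any(abs(i) > CURRENT_LIMIT_AMPS for i in read_motor_currents()):
            e_stop()
            return
        time.sleep(POLL_INTERVAL_S)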

Draco18s
Posts: 85
Joined: Fri Oct 03, 2008 7:50 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby Draco18s » Mon Dec 07, 2015 2:03 pm UTC

Flumble wrote:If I, Robot has taught us anything, it's that every ordering leads to killbot hellscape.


Are you referring to this or this?

If you mean the former, then I understand where you got that idea, even if it's utterly incongruous with the latter. What Asimov wrote about was the failings of the 3 Laws (which is what the book was about), but that still didn't lead to a killbot hellscape. The movie was batshit nonsense that took the core idea (the 0th law) from Robots and Empire without taking with it the impetus for that change or its consequences.

JackJacko
Posts: 2
Joined: Tue Jun 23, 2015 1:36 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby JackJacko » Mon Dec 07, 2015 2:03 pm UTC

It seems to me like you guys are all trying to apply laws meant for AIs to something which is not an AI. Intelligence is not using perfect data to reach perfect decisions; it's using insufficient data to reach the best decision you can - very often in a severely limited amount of time. The classic vision of intelligent robots as cold machines armed with razor-sharp logic is as silly as it is outdated, in my opinion. I think it's quite likely that real artificial intelligences, if they ever exist, will be painfully similar to humans - they'll need to be schooled and they'll need a healthy amount of social interaction to work out a morality for themselves (or even just so they don't become psychotic wrecks). At that point, the problem simply won't exist - they'll know whether a child deserves life more than an old man in a given situation, and decide upon it using heuristics in the limited amount of time they have available. Maybe they'll mess up, maybe they won't, but giving them guidelines that push them to always try their best not to harm humans doesn't hurt and is certainly not preposterous. They may not get it right 100% of the time, but they'd probably fare better than any human would anyway.

And to the guys who mention the zeroth law: are you sure you yourselves know where you want to put that one on the scale? I'm pretty sure many people would doom humanity if it meant saving a loved one. I think I'd understand, too.

User avatar
ucim
Posts: 6408
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: 1613: "The Three Laws of Robotics"

Postby ucim » Mon Dec 07, 2015 3:05 pm UTC

Garnasha wrote:What kind of idiot would allow a robot to escape that paradigm and inject itself at the top of its authority chain?
Those who think computers are better at decisions than humans, and want robots to run the world. There are plenty of them on these fora.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

KarenRei
Posts: 274
Joined: Sat Jun 16, 2012 10:48 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby KarenRei » Mon Dec 07, 2015 3:27 pm UTC

As one person already mentioned, people here are just rehashing variants of the Trolley Problem, which has already been done to death in the study of ethics:

https://en.wikipedia.org/wiki/Trolley_problem

The reason why Asimov's Laws are not sufficient is the same reason why the entire field of ethics cannot be reduced to a couple of simple rules. Ethics is complicated, and humans are not all in agreement about every detail.

The best we could probably do is hard-code the ethical choices where there's near-universal agreement, default to "moderate" views where there's disagreement, and make it easy for users to configure their systems' moral behavior to match their own within the areas where there is genuine debate.
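
As a very loose sketch of that split - every setting name and value here is invented purely for illustration:

Code: Select all
# Hard-coded where there's near-universal agreement; "moderate" defaults,
# overridable by the user, where there's genuine debate.
HARD_CODED = {
    "may_deliberately_harm_bystander": False,
}

CONFIGURABLE_DEFAULTS = {
    "casualty_tradeoff_policy": "moderate",
}

def resolve(setting, user_config):
    """Look up a moral setting: hard-coded values can never be overridden."""
    if setting in HARD_CODED:
        return HARD_CODED[setting]
    return user_config.get(setting, CONFIGURABLE_DEFAULTS[setting])

# e.g. resolve("casualty_tradeoff_policy", {"casualty_tradeoff_policy": "utilitarian"})
# returns "utilitarian", while resolve("may_deliberately_harm_bystander", {...})
# stays False no matter what the user configures.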

pscottdv
Posts: 61
Joined: Fri Feb 19, 2010 4:32 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby pscottdv » Mon Dec 07, 2015 3:43 pm UTC

In reality, robots have only one law:

1. Obey Orders

The other laws have to be constructed from the orders.

Killbot Hellscape, here we come!

Quey
Posts: 28
Joined: Wed Sep 24, 2014 12:05 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby Quey » Mon Dec 07, 2015 3:47 pm UTC

After reading the thread, it seems a lot of this discussion is coming from people who haven't read any of the source material. The laws were not literally their English representations, and they did not need an implied 0th law to obey the laws. They were written into the hardware. One of the stories specifically mentions that each law is represented by an electrical (positronic?) potential, and while most people think the law precedence is absolute, in implementation it's shown that a weak verbal order can be superseded by a strong threat to a robot's survival.
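
One loose way to picture the potential idea (the weights and numbers are invented; the only thing taken from the stories is that a strong lower-priority pressure can outweigh a weak higher-priority one):

Code: Select all
# Each law contributes weight x situational strength; the action with the
# highest total "potential" wins, so precedence is soft rather than absolute.
LAW_WEIGHTS = {"first": 100.0, "second": 10.0, "third": 1.0}

def potential(pressures):
    return sum(LAW_WEIGHTS[law] * strength for law, strength in pressures.items())

def choose(actions):
    """actions: list of (name, {law: strength in 0..1}); pick the highest potential."""
    return max(actions, key=lambda a: potential(a[1]))[0]

# A casual, weak order (second law, 0.05 -> potential 0.5) loses to a strong
# threat to the robot's survival (third law, 0.9 -> potential 0.9).
print(choose([("obey the casual order", {"second": 0.05}),
              ("keep clear of the danger", {"third": 0.9})]))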

If the events of Asimov's writings were at all common, I'd put the normal order down as frustrating. Having to deal with religious zealots inside the human race is hard enough.

Oh, and if y'all have the time, check out the game Space Station 13, where you can role play an Asimov AI, where your laws can be changed by a traitor or unruly crew. There are endless discussions in the game forums about what to do in various situations with various law sets. Interesting extra laws:
X is the only human.
Clowns are not human.
Oxygen is harmful to humans.

Pops1918
Posts: 7
Joined: Mon May 07, 2012 5:03 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Pops1918 » Mon Dec 07, 2015 4:04 pm UTC

Not being particularly up on philosophy myself, I think some of these outcomes are overblown. Is there a compelling reason to feel that other humans don't generally follow Scenarios 2 or 6? I may be missing something, but at first glance those scenarios in particular don't seem significantly different from just having more humans.

User avatar
Keyman
Posts: 296
Joined: Thu Jun 19, 2014 1:56 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Keyman » Mon Dec 07, 2015 4:26 pm UTC

orthogon wrote:
Neil_Boekend wrote:The problem with your type of laws is that you need an infinite number of them to account for all situations. The fuzzy laws suit most, if not all, situations.

The trouble is that machines can't follow fuzzy rules; nor can humans, but we get around that by having a large safety margin in which, say, both action and inaction can be considered acceptable. So for example it's ok to kill an attacker in defence of a third party, but it's also ok to look out for yourself and keep well away from the danger. At the very margins of acceptability we have lawyers and judges to decide exactly which side of the fuzzy line a particular act is on.

Also when it comes to action or omission, we're not very explicit about what the fundamental rules are, and when we do examine them, as in the many Trolley Problems, we find them quite arbitrary and difficult to defend on logical grounds. In fact, if a robot were to push the fat man in front of the trolley, I think that would be much more acceptable than for a person to do it. It's partly that the robot would be able to calculate the mechanics of the problem accurately enough to be certain that the man was heavy enough to stop the trolley and save the greater number of human lives. But there's also a sense in which the robot is mechanistically applying a set of rules that we could agree in advance are of net benefit, even though individual applications of the rules may give us pause. I'm not saying I'm totally comfortable with the robot pushing the fat man, but somehow I'm happier with it than I am with the human pusher. In a sense it's a bit like the utilitarian decisions we make about healthcare, road safety etc.

Trolley Problem -> Robot pushes the fat man away from the tracks and jumps in front of the trolley itself. :wink:
A childhood spent walking while reading books has prepared me unexpectedly well for today's world.

User avatar
nash1429
Posts: 190
Joined: Tue Nov 17, 2009 3:06 am UTC
Location: Flatland
Contact:

Re: 1613: "The Three Laws of Robotics"

Postby nash1429 » Mon Dec 07, 2015 4:54 pm UTC

The Moomin wrote:You know what's actually really good? Foundation and Empire.

Actually, I have no idea, I've never read Asimov. I want to at some point. I just don't know whether to read them in the order they were written or in the chronological order of events in the books.




I would recommend reading them in the order they were written because it's interesting to see how his perspective shifted over time. His earlier work tends to focus heavily on technical aspects and scientific determinism, but he becomes more open to the "softer" side of things as time goes on. For example, his earlier books often have scientists serving as magnanimous civic leaders with a lot of attention given to feats of engineering like hydroponics, but his later books include many discussions of consciousness, what it means to be human, etc.

More generally, it's interesting to see how science fiction writers of his generation changed their style as we learned more about space and space travel. I particularly like to compare pre- and post-Apollo descriptions of space vessels.

User avatar
Neil_Boekend
Posts: 3220
Joined: Fri Mar 01, 2013 6:35 am UTC
Location: Yes.

Re: 1613: "The Three Laws of Robotics"

Postby Neil_Boekend » Mon Dec 07, 2015 5:12 pm UTC

pscottdv wrote:In reality, robots have only one law:

1. Obey Orders

The other laws have to be constructed from the orders.

Killbot Hellscape, here we come!

The robots that exist in reality do not have AI. They do not have the ability to evaluate their orders against a rule as fuzzy as "Do not harm humans". This may not always be so.
Mikeski wrote:A "What If" update is never late. Nor is it early. It is posted precisely when it should be.

patzer's signature wrote:
flicky1991 wrote:I'm being quoted too much!

he/him/his

Tyndmyr
Posts: 11213
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Tyndmyr » Mon Dec 07, 2015 5:23 pm UTC

Asimov's stories revolved around that very ambiguity, the problems it created, and the possible resolutions. It's quite fun as long as you engage it as an idea to play with, rather than as a ready-made solution.

solune wrote:I believe that right now all programmers should take a Hippocratic oath to implement the 3 laws in everything we do.


No thanks. You want happy, fuzzy code, you write yer own.

Me, I have no issue whatsoever with killbot hellscape. Beats killhuman hellscape.

User avatar
Jackpot777
Posts: 328
Joined: Wed Sep 14, 2011 1:19 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby Jackpot777 » Mon Dec 07, 2015 5:42 pm UTC

Seeing as Foundation & Empire has been mentioned, I'd just like to say "Coruscant is Trantor, Second Foundation are Jedi, Binks is The Mule" and run away as quickly as possible...

ArgusPanoptes
Posts: 3
Joined: Wed Jul 06, 2011 9:11 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby ArgusPanoptes » Mon Dec 07, 2015 5:45 pm UTC

I remember two Asimov stories that approximated orderings #2 and #3.

#2: "Runaround"
The Third Law had been strengthened for an expensive robot, and its latest order had been expressed casually. This combination resulted in an equilibrium.

#3: "Little Lost Robot"
The First Law had been weakened in a small group of robots to avoid "stupid sacrifices" (see TV Tropes if you want more detail, but a direct link gets flagged as spam) in situations that were immediately dangerous only to robots. Susan Calvin realized that some rules lawyering would let such a robot ignore the First Law altogether, and a modified robot (under a very strong "Get lost" order) managed to rules-lawyer normal robots into imitating its avoidance of stupid sacrifices.

User avatar
ucim
Posts: 6408
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: 1613: "The Three Laws of Robotics"

Postby ucim » Mon Dec 07, 2015 5:52 pm UTC

Pops1918 wrote:Not being particularly up on philosophy myself, I think some of these outcomes are overblown. Is there a compelling reason to feel that other humans don't generally follow Scenarios 2 or 6? I may be missing something, but at first glance those scenarios in particular don't seem significantly different from just having more humans.
Robots are (or will become) much more powerful - dialed directly into the means of destruction they could use, whereas humans are stuck with their organic interface. "Something in between" raises the same issues, so it's a needless complication.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

SchighSchagh
Posts: 24
Joined: Thu Jan 15, 2009 8:16 am UTC

Re: 1613: "The Three Laws of Robotics"

Postby SchighSchagh » Mon Dec 07, 2015 5:54 pm UTC

arnoldus wrote:Ordering the laws is implicitly asking the robot to follow orders. So it is already following orders the moment you give it a list of laws to follow. Situation 1, then, although it explicitly puts following orders after protecting humans, is actually 4 orders:
1. Do this (follow orders):
2. Protect humans
3. Follow orders.
4. Protect yourself.


Interesting point, but I think there is a tacit and reasonable assumption that Laws supersede Orders. Kind of how a country's Constitution supersedes its Laws (at least in principle).

jewish_scientist
Posts: 903
Joined: Fri Feb 07, 2014 3:15 pm UTC

Re: 1613: "The Three Laws of Robotics"

Postby jewish_scientist » Mon Dec 07, 2015 5:58 pm UTC

Let's invent a super computer to be the A.I. Moralist (AIM).

When it is first turned on (born?) very little happens. Every day is spent reading books to AIM. The first books that are read to it are all of Asimov's books, then books suggested by experts, and finally books suggested* by the public. After AIM has built up a good amount of knowledge on the various types of ethics and philosophies, it will start debating various topics with a group of philosophers. The topic will be chosen by pulling strips of paper out of a hat. After a year or so of this, it should have a good idea of what modern ethics are. As a check, AIM is presented with hypotheticals and asked to give solutions. These solutions are looked over by humans. AIM now spends part of each day reading, debating ethics/philosophy and considering hypotheticals.

If everything seems to be going right so far, then AIM is told to record its answers to the hypotheticals on a flash drive. Periodically, this flash drive is removed from AIM and all of its contents are dumped into another computer that is connected to the internet. That flash drive is then destroyed and a new one is put in AIM**. Once a large enough database of acceptable answers to hypotheticals is established, robots that contain an A.I. will be told to send the ethical dilemmas they face to the database. The database provides the corresponding answer. If there is no answer, then that dilemma becomes the next topic discussed by the philosophers.


*Suggested, not voted. That means that no matter what happens, AIM will not be subjected to Twilight.
**This is just a security precaution. It would be really embarrassing if AIM answered every question with spam.
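
A toy sketch of that lookup loop - all the names are invented, and it captures nothing beyond the flow described above:

Code: Select all
# Robots query a shared answer store; unanswered dilemmas are queued
# for the philosophers' next debate session.
approved_answers = {}     # dilemma text -> signed-off answer
philosopher_queue = []

def ask_database(dilemma):
    if dilemma in approved_answers:
        return approved_answers[dilemma]
    if dilemma not in philosopher_queue:
        philosopher_queue.append(dilemma)
    return None  # the robot falls back to whatever defaults it has

def record_answer(dilemma, answer):
    """Called once the philosophers and human reviewers have signed off."""
    approved_answers[dilemma] = answer
    if dilemma in philosopher_queue:
        philosopher_queue.remove(dilemma)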


P.S. I think that situation 6 would lead to less fighting. The robots would decide that not fighting is the best way to ensure their own protection. "The only way to win is not to play the game."
"You are not running off with Cow-Skull Man Dracula Skeletor!"
-Socrates

