[personal profile] miriam_e
I guess most science fiction readers would know of Isaac Asimov's 3 laws of robotics. How many of you realise that there is a fourth law? I didn't... till today. Apparently Isaac Asimov added a law zero to the other three:
0 - A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
1 - A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2 - A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3 - A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.
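
What strikes me is that the numbering itself does most of the work: each law binds only where it doesn't conflict with a lower-numbered one. Purely as a toy sketch (my own doodle, nothing from Asimov, and all the names are invented), that priority ordering might look like this in Python:

```python
# Toy sketch -- my own doodle, not Asimov's mechanism. The numbering is
# a strict priority order: each law binds except where obeying it would
# conflict with a lower-numbered law.

LAWS = {
    0: "may not injure humanity, or through inaction allow it to come to harm",
    1: "may not injure a human, or through inaction allow one to come to harm",
    2: "must obey orders given by human beings",
    3: "must protect its own existence",
}

def binding_laws(conflicts):
    """conflicts: set of (higher, lower) pairs, meaning obeying law
    `higher` would conflict with law `lower`. A law binds only when it
    conflicts with no lower-numbered law."""
    return [
        n for n in sorted(LAWS)
        if not any(h == n and l < n for (h, l) in conflicts)
    ]

# If obeying an order (Law 2) would injure a human (Law 1), Law 2 yields:
print(binding_laws({(2, 1)}))  # prints: [0, 1, 3]
```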

Date: 2006-04-22 01:29 am (UTC)
From: [identity profile] rpeate.livejournal.com
Yes, I learned of that during my I, Robot phase of some months ago. I always wonder how Law 2 works if two humans give conflicting orders. Is there a hierarchy of obedience? Would a robot obey an armed robber ("Hold this door shut!") as readily as a police officer ("Open this door now!"), or would it know the difference?

Zeroth Law

Date: 2006-04-22 01:56 am (UTC)
From: [identity profile] revbobbob.livejournal.com
Sure. It was in his later robot stories. Ahh, here it is: in Robots and Empire. It also appears in the continuation of the Foundation series by the "Killer Bs".

Date: 2006-04-22 02:40 am (UTC)
From: [identity profile] damien-wise.livejournal.com
I liked the simplicity of the original three and the ballsiness of distilling such an important ruleset down to three statements.

I can see why Asimov needed to add the zeroth law, though. It makes robots responsible for humanity collectively. And, from an author's point of view, it gives him another chunk of metaphysics/morality to play with. :)

And yet, because the laws contain clauses that prohibit inaction, robots can never take a neutral line (Swiss robots, anyone?) or avoid contemplating the situation they've been placed in.

This leads to some intriguing possibilities for conflicts between the Laws, which are the source of some of Asimov's more interesting stories and thought experiments.

The first example that pops into my head featuring a conflict between the 0th and 1st Laws boils down to a refutation of utilitarianism.
Arguments such as "for the greater good" or "the needs of the many outweigh the needs of the one" come to a quick end.
Imagine a robot with an opportunity to save a large slice of humanity (satisfying the 0th Law) by killing the next Hitler, yet stopping short because doing so would violate the First Law.
[Even if it had been unwittingly following orders to go to a certain place/time to bump into the despot, then needed to kill in self-defense.]

Date: 2006-04-22 08:12 am (UTC)
From: [identity profile] miriam-e.livejournal.com
I'm sure there are examples of Asimov's stories where people give conflicting orders. It has been a while since I read the robot stories.

(Love the Tron icon, by the way.)

Re: Zeroth Law

Date: 2006-04-22 08:22 am (UTC)
From: [identity profile] miriam-e.livejournal.com
Goody! I thought I'd read all his robot stories. I'm delighted to find out that I haven't. I'll have to remedy that. Thanks, Bob.

I also didn't know there had been a continuation of the Foundation series by the "Killer Bs" (whoever they are). Okay... just looked and found info and pictures of authors and books:
http://www.wigglefish.com/zine/twentyquestions/killerb/
(oh I love Google so!)

Date: 2006-04-22 08:34 am (UTC)
From: [identity profile] miriam-e.livejournal.com
The three laws are really neat. I suspect inventing them gave Mr Asimov much glee, finding new ways to circumvent them in his stories. :)

In the end I think the three laws will be unusable. Robots will never be simple enough to program with straightforward laws like that if they are complex enough to wash the dishes, take out the trash, and feed the dog. And by the time they can understand the laws in a conceptual fashion, the laws will no longer apply anyway, because the robots will then be equal to humans and deserving of emancipation... though I believe they won't want independence, even when they are our superiors. See my short story Love Honour and Obey on my website at
http://werple.net.au/~miriam/#stories
for an explanation of why.

Date: 2006-04-25 08:05 am (UTC)
From: [personal profile] thorfinn
The lower-numbered laws took priority, and if there was a conflict at the same level then, depending on the "complexity" of the robot's brain (i.e., how discriminating it could be in evaluating that conflict), the robot could go into brainlock and break.

For your example, the conflict could be resolved by appealing to Rule 1 instead of Rule 2. If the armed robber has his weapon out and is likely to harm someone, and the policeman is likely to stop him, then by Rule 1 the robot opens the door. OTOH, if the armed robber has put his gun away and the policeman has his gun out and looks likely to shoot, then, also by Rule 1, the robot keeps the door closed.
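
To make that concrete, here's a rough sketch of the resolution I described. It's entirely my own invention, nothing canonical, and the scenario data is made up for the door example: score each candidate action by the lowest-numbered law it violates, pick the least-bad action, and brainlock on an exact tie.

```python
# Rough sketch of the resolution described above -- my own invention,
# not anything from Asimov. Lower-numbered laws are worse to violate;
# an exact tie at the same level is a brainlock.

BRAINLOCK = "brainlock"

def choose_action(candidates, violations):
    """candidates: list of action names.
    violations: dict mapping action -> set of violated law numbers.
    Returns the least-bad action, or BRAINLOCK on a tie."""
    def severity(action):
        broken = violations[action]
        # A clean action gets an infinite "severity number" (best possible).
        return min(broken) if broken else float("inf")
    best = max(severity(a) for a in candidates)
    winners = [a for a in candidates if severity(a) == best]
    return winners[0] if len(winners) == 1 else BRAINLOCK

# Robber's gun is out: holding the door lets harm happen (Law 1);
# opening it merely disobeys the robber's order (Law 2).
print(choose_action(
    ["open the door", "hold the door shut"],
    {"open the door": {2}, "hold the door shut": {1}},
))  # prints: open the door
```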
