The three laws of Robotics

Latest post in my blog on popular science:

The three laws of Robotics
http://populscience.blogspot.com/2020/01/the-three-laws-of-robotics.html

Regards,

Actually, Asimov’s robotics laws are common sense, although, as you note, they cannot be rigorously proven because of what Gödel showed. BUT, be that as it may, the REAL contribution Asimov made regarding robots was the warning that the owners of the robots would become lazy and let the robots do all the work so the owners could goof off, which is exactly what happened on the Spacer Worlds and led to their decline and death. THAT is what we need to stress NOW so the same thing does not happen to us!

The First Law is far too broad for even humans to consistently apply, especially if you expand the meaning of “harm” past direct physical harm–but even without that issue, the “through inaction” part raises a whole host of issues. It’s noteworthy that human law, at least in the English tradition, has no such requirement–I have no legal obligation, in most cases, to take any action at all (however trivial) to prevent harm (however great) to another person. When you impose a duty to act, you inevitably raise issues of foreseeability, leading to another ubiquitous sci-fi trope: the Butterfly Effect. Its better-known forms deal with the long-term consequences of apparently-trivial acts, but we see all the time that actions have unexpected consequences even in the fairly short term.

But even ignoring that problem as well, it’s trivial to imagine circumstances in which the rule conflicts with itself, perhaps most famously the Trolley Problem. Asimov would not have been unaware of such questions, of course, and the frequency with which such internal conflicts arose in his writings leads one to believe that it was intentional–rather than to set forth “these should be the laws for robots”, the idea may instead have been “this is how such apparently-common-sense rules quickly become unworkable.”

True, in his short stories about robots Asimov often sets up situations where the laws conflict with one another, or even with themselves. But he also took them seriously. That’s why, as he himself recounted, while watching the premiere of “2001: A Space Odyssey” he grew increasingly nervous and suddenly exclaimed: “But they are violating the First Law!” His friend (I seem to remember it was Sprague de Camp) replied: “Why don’t you strike them down with a lightning bolt, Isaac?”, which calmed him down.

Regards,

Yes, and of course he’d forgotten that the laws were his invention, and nobody else’s notional robots necessarily had to be programmed that way. But even in the framework of Asimov’s laws, HAL didn’t necessarily violate them (or at least wasn’t responsible for having violated them), for at least two possible reasons:

  • HAL considered the mission (which only he fully knew) to be of the utmost importance for all mankind, and saw Dave and Frank as threatening the success of the mission–in that case, the zeroth law would trump the first. Though I’m not sure how killing the rest of the crew fits in with this one…
  • Alternatively, HAL was simply insane. Insanity is properly recognized as a defense against criminal charges for human beings; if we’re going to treat AIs as autonomous and morally culpable actors, a similar defense should be allowed.

You have pinpointed the problem with your first explanation. Another problem is that there is no Zeroth Law: you are assuming that the welfare of humanity should rank above the life of a single human being, but that is not implied by Asimov’s laws, and, by the way, it is Caiaphas’s argument for condemning Christ.

I’m not sure how a programmed machine could be insane. That could be a good subject for another post in my blog. Thank you :slight_smile:

Regards,

From your paper, page 2 of the PDF:

To address scenarios in which robots take responsibility towards human populations, Asimov later added an additional, zeroth, law.
0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.

Perhaps this was done after he watched the premiere of 2001.

I make no such assumption; that’s the natural result of the ordering of the zeroth and first laws: the good of humanity would supersede the good of an individual human. It is a position I’m generally inclined to agree with, though I don’t believe I’d accept it as an absolute principle, and I don’t know that what I think on that particular question is necessarily all that interesting.

Neither am I, but HAL was, to all appearances, something with which we as yet have no experience–a completely self-aware, autonomous individual, while still being a machine[1]. While we have all kinds of examples of such things in various forms of literature, we don’t have anything at all like it in the real world. It’s possible, of course, that this was simply a very effective illusion–HAL wasn’t self-aware at all, but merely programmed to (very convincingly) behave that way, and he was doing precisely what he had been programmed to do. In that case, of course, the concept of insanity is nonsensical, but so is the idea of HAL’s being culpable for violating the First Law.


  1. C. S. Lewis, I suspect, would have considered such a thing inherently impossible–he argued, I believe in Mere Christianity, that under a view of pure materialism (i.e., people are purely matter and have no spirit/soul), human behavior would necessarily be deterministic. Under that position (which I find quite persuasive), any electronic or mechanical (or even optical; I understand optical computers are a thing) device would similarly act in a deterministic way. It’s the soul that allows true (even if limited) autonomy. Thus, unless we’re prepared to believe that HAL had a soul, we’d have to conclude that his apparent independence and autonomy were nothing more than very skillful illusions. ↩︎

Ha! You caught me! I had forgotten this.

Regards,