Let us assume that this problem is somehow miraculously overcome and robots unfailingly identify humans. The next question pertains to the notion of "injury" (still within the First Law). Is it limited to physical injury (the disruption of the physical continuity of human tissues or of the normal functioning of the human body)?
Or should "injury" in the First Law also encompass the no less serious mental, verbal, and social injuries (after all, they are all known to have physical side effects which are, at times, no less severe than direct physical "injuries")? Is an insult an "injury"? What about being grossly impolite, or psychologically abusive? Or offending religious sensitivities, or being politically incorrect - are these injuries? The bulk of human (and, therefore, inhuman) actions actually offend one human being or another, have the potential to do so, or seem to be doing so.
Consider surgery, driving a car, or investing money in the stock exchange. These "innocuous" acts may end in a coma, an accident, or ruinous financial losses, respectively. Should a robot refuse to obey human instructions which may result in injury to the instruction-givers?
Consider a mountain climber – should a robot refuse to hand him his equipment lest he fall off a cliff in an unsuccessful bid to reach the peak? Should a robot refuse to obey human commands pertaining to the crossing of busy roads or to the driving of (dangerous) sports cars?
What level of risk should trigger robotic refusal, or even prophylactic intervention? At which stage of interactive man-machine collaboration should it be activated? Should a robot refuse to fetch a ladder or a rope for someone who intends to commit suicide by hanging himself (that's an easy one)?
Should it ignore an instruction to push its master off a cliff (definitely), to help him climb the cliff (less assuredly so), to drive him to the cliff (maybe), to help him get into his car in order to drive him to the cliff... Where does the buck of responsibility and obedience stop?
Whatever the answer, one thing is clear: such a robot must be equipped with more than a rudimentary sense of judgment, with the ability to appraise and analyse complex situations, to predict the future, and to base its decisions on very fuzzy algorithms (no programmer can foresee all possible circumstances). To me, such a "robot" sounds far more dangerous (and humanoid) than any recursive automaton which does NOT incorporate the famous Three Laws.
Moreover, what, exactly, constitutes "inaction"? How can we set apart inaction from failed action or, worse, from an action which failed by design, intentionally? If a human is in danger and a robot tries to save him and fails – how could we determine to what extent it exerted itself and did everything it could?
How much of the responsibility for a robot's inaction, partial action, or failed action should be imputed to the manufacturer – and how much to the robot itself? And when a robot finally decides to ignore its own programming – how are we to learn of this momentous event? Outside appearances can hardly be expected to help us distinguish a rebellious robot from a lackadaisical one.
The situation gets much more complicated when we consider states of conflict.
Imagine that a robot is obliged to harm one human in order to prevent him from hurting another. The Laws are wholly inadequate in this case. The robot would have to establish either an empirical hierarchy of injuries – or an empirical hierarchy of humans. Should we, as humans, rely on robots or on their manufacturers (however wise, moral, and compassionate) to make this selection for us? Should we abide by their judgment as to which injury is the more serious and warrants an intervention?
A summary of Asimov's Laws would give us the following "truth table":
A robot must obey human commands, except if:

- Obeying them is likely to cause injury to a human, or
- Obeying them will let a human be injured.

A robot must protect its own existence, with three exceptions:

- When such self-protection is injurious to a human;
- When such self-protection entails inaction in the face of potential injury to a human;
- When such self-protection results in robot insubordination (failing to obey human instructions).

Trying to create a truth table based on these conditions is the best way to demonstrate the problematic nature of Asimov's idealized yet highly impractical world.
Here is an exercise:
Imagine a situation (consider the example below, or one you make up) and then create a truth table based on the above five conditions. In such a truth table, "T" would stand for compliance and "F" for non-compliance.
Example:
A radioactivity-monitoring robot malfunctions. If it self-destructs, its human operator might be injured. If it does not, its malfunction will equally seriously injure a patient dependent on its performance.
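One way to attempt the exercise mechanically is to encode the five conditions as Boolean variables and enumerate all 32 rows. The sketch below is merely illustrative: the condition names, and the way the monitoring-robot example is mapped onto them, are my own assumptions.

```python
from itertools import product

# Hypothetical labels for the five conditions summarized above:
#   c1: obeying would injure a human
#   c2: obeying would let a human be injured
#   c3: self-protection would injure a human
#   c4: self-protection entails inaction in the face of potential injury
#   c5: self-protection entails disobeying a human command

def obey(c1, c2):
    """True ("T") = the robot complies with the command."""
    return not (c1 or c2)

def self_protect(c3, c4, c5):
    """True ("T") = the robot protects its own existence."""
    return not (c3 or c4 or c5)

# Enumerate all 32 rows of the truth table.
print("c1 c2 c3 c4 c5 | obey protect")
for c1, c2, c3, c4, c5 in product([True, False], repeat=5):
    print(*(("T " if c else "F ") for c in (c1, c2, c3, c4, c5)),
          "|", "T" if obey(c1, c2) else "F",
          "   ", "T" if self_protect(c3, c4, c5) else "F")

# The malfunctioning monitor: self-destruction injures the operator
# (modelled here, somewhat loosely, as obeying an implicit "destroy
# yourself" command with c1=True), while staying intact injures the
# patient (c3=True). Both columns come out "F" - a deadlock.
assert obey(True, False) is False
assert self_protect(True, False, False) is False
```

Under this encoding no assignment lets the robot act in the example: every available option is forbidden, which is precisely the inadequacy the exercise is meant to expose.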
One possible solution is, of course, to introduce gradations, a probability calculus, or a utility calculus. As phrased by Asimov, the rules and conditions are of a threshold, yes-or-no, take-it-or-leave-it nature. But if robots were instructed to maximize overall utility, many borderline cases would be resolved.
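A utility-calculus robot might, for instance, score every available action by its expected harm and pick the least harmful one. In this minimal sketch the probabilities and harm scores for the monitoring-robot dilemma are invented purely for illustration:

```python
def expected_harm(outcomes):
    """outcomes: list of (probability, harm) pairs for one action."""
    return sum(p * h for p, h in outcomes)

# The monitoring-robot dilemma, with made-up numbers:
actions = {
    "self-destruct":  [(0.3, 8.0)],   # operator might be injured
    "stay operative": [(0.9, 9.0)],   # patient is very likely injured
}

# Choose the action with the lowest expected harm.
best = min(actions, key=lambda a: expected_harm(actions[a]))
print(best)  # with these numbers, prints "self-destruct"
```

Note that the graded calculus "resolves" the deadlock only by assuming someone has already supplied the probabilities and the relative weights of the two injuries, which is exactly the hierarchy-of-injuries problem raised above.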
Still, even the introduction of heuristics, probability, and utility does not help us resolve the dilemma in the example above. Life is about inventing new rules on the fly, as we go, and as we encounter new challenges in a kaleidoscopically metamorphosing world. Robots with rigid instruction sets are ill suited to cope with that.
Sam Vaknin is the author of Malignant Self Love - Narcissism Revisited and After the Rain - How the West Lost the East. He is a columnist for Central Europe Review, United Press International (UPI) and eBookWeb and the editor of mental health and Central East Europe categories in The Open Directory, Suite101 and searcheurope.com.
Visit Sam's Web site at http://samvak.tripod.com