The movie "I, Robot" is a muddled affair. It relies on shoddy pseudo-science and on the general sense of unease that artificial (non-carbon-based) intelligent life forms seem to provoke in us. But it offers no more than a comic-book treatment of the important themes it broaches. "I, Robot" is just another - and relatively inferior - entry in a long line of far better movies, such as "Blade Runner" and "Artificial Intelligence".
Sigmund Freud said that we have an uncanny reaction to the inanimate. This is probably because we know that – pretensions and layers of philosophizing aside – we are nothing but recursive, self-aware, introspective, conscious machines. Special machines, no doubt, but machines all the same.
Consider the James Bond movies. They constitute a decades-spanning gallery of human paranoia. The villains change: communists, neo-Nazis, media moguls. But one kind of villain is a fixture in this psychodrama, in this parade of human phobias: the machine. James Bond always finds himself confronted with hideous, vicious, malicious machines and automata.
It was precisely to counter this wave of unease, even terror, irrational but all-pervasive, that the late science-fiction writer (and scientist) Isaac Asimov invented the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Many have noticed the lack of consistency and, therefore, the inapplicability of these laws when considered together.
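The strict priority ordering among the three laws can be sketched as a toy rule evaluator. The boolean flags below are hypothetical stand-ins: actually computing them from a model of the world is precisely the open problem discussed in the rest of this article.

```python
# Toy model of the Three Laws as a strict priority ordering.
# Each candidate action is tagged with precomputed boolean effects;
# deriving these flags from a real world model is the hard, unsolved part.

def first_violated_law(action):
    """Return the number of the first law the action violates, or None."""
    if action.get("injures_human") or action.get("allows_harm_by_inaction"):
        return 1  # First Law outranks everything
    if action.get("disobeys_human_order"):
        return 2  # Second Law, subordinate to the First
    if action.get("endangers_self"):
        return 3  # Third Law, subordinate to the first two
    return None

def choose(actions):
    """Pick the first candidate action that violates no law, if any."""
    for action in actions:
        if first_violated_law(action) is None:
            return action
    return None  # every option violates some law: Asimov's "nervous breakdown"
```

Note that when every available action violates some law, the evaluator simply has no answer - the paradox the following paragraphs explore.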
First, they are not derived from any coherent worldview or background. To be properly implemented, and to avoid being interpreted in a potentially dangerous manner, the robots in which they are embedded must be equipped with reasonably comprehensive models of the physical universe and of human society.
Without such contexts, these laws soon lead to intractable paradoxes (experienced as a nervous breakdown by one of Asimov's robots). Conflicts are ruinous in automata based on recursive functions (Turing machines), as all robots are. Gödel demonstrated that "Principia Mathematica", ostensibly a comprehensive and self-consistent logical system, harbored one such self-destructive flaw. It was enough to discredit the whole magnificent edifice constructed by Russell and Whitehead over a decade.
Some argue against this and say that robots need not be automata in the classical, Church-Turing sense. They could act, instead, according to heuristic, probabilistic rules of decision-making. There are many other types of functions (non-recursive) that can be incorporated in a robot, they remind us.
True, but then how can one guarantee that the robot's behavior is fully predictable? How can one be certain that robots will fully and always implement the Three Laws? Only recursive systems are predictable in principle, though, at times, their complexity makes prediction impossible in practice.
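The difference can be shown in miniature. A recursive (computable) rule returns the same verdict for the same input every time; a probabilistic rule does not, so no run of observations can guarantee it will always honor the Laws. The threshold and sensor values below are purely illustrative.

```python
import random

def deterministic_rule(sensor_reading):
    # A recursive rule: identical input always yields an identical verdict.
    return sensor_reading > 0.5

def probabilistic_rule(sensor_reading, rng):
    # A heuristic rule: the verdict varies even for identical input.
    return rng.random() < sensor_reading

rng = random.Random(0)  # seeded only so the demonstration is repeatable
same_input = 0.5
deterministic_outputs = {deterministic_rule(same_input) for _ in range(100)}
probabilistic_outputs = {probabilistic_rule(same_input, rng) for _ in range(100)}
# deterministic_outputs collapses to a single verdict;
# probabilistic_outputs contains both True and False for the same input.
```

One hundred trials of the deterministic rule produce exactly one answer; one hundred trials of the probabilistic rule produce both answers, which is the essay's point about predictability.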
This article deals with some commonsense, basic problems raised by the Laws. The next article in this series analyses the Laws from a few vantage points: philosophy, artificial intelligence, and some systems theories.
An immediate question springs to mind: HOW will a robot identify a human being? Surely, in a future of perfect androids, constructed of organic materials, no superficial, outer scanning will suffice. Structure and composition will not be sufficient differentiating factors.
There are two ways to settle this very practical issue: one is to endow the robot with the ability to conduct a Converse Turing Test (to separate humans from other life forms) - the other is to somehow "barcode" all robots by implanting some remotely readable signaling device inside them (such as an RFID - Radio Frequency Identification - chip). Both present additional difficulties.
The second solution will prevent the robot from positively identifying humans. It will be able to identify with any certainty only robots (or humans with such implants). This is ignoring, for discussion's sake, defects in manufacturing or the loss of implanted identification tags. And what if a robot were to get rid of its tag? Would this, too, be classified as a "defect in manufacturing"?
In any case, robots will be forced to make a binary choice. They will be compelled to classify one type of physical entity as robots – and all others as "non-robots". Will non-robots include monkeys and parrots? Yes, unless the manufacturers equip the robots with digital, optical, or molecular representations of the human figure (masculine and feminine) in varying positions (standing, sitting, lying down). Or unless all humans are somehow tagged from birth.
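The tag-based scheme and its failure mode can be sketched in a few lines. The tag registry and tag values are hypothetical; the point is that everything without a readable tag, human or otherwise, collapses into a single undifferentiated bucket.

```python
# Sketch of the tag-based scheme: the robot can only verify the presence
# of an implanted ID. The registry below is a hypothetical stand-in.
KNOWN_ROBOT_TAGS = {"RFID-001", "RFID-002"}

def classify(entity_tag):
    """The binary choice forced on the robot: 'robot' or 'non-robot'."""
    if entity_tag in KNOWN_ROBOT_TAGS:
        return "robot"
    # A human, a monkey, a parrot, or a robot that shed its tag all land
    # here -- the robot cannot tell them apart, let alone identify humans.
    return "non-robot"
```

A tagged robot is identified with certainty; a human, an animal, and a de-tagged robot all return the same verdict, which is exactly the objection raised above.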
These are cumbersome and repulsive solutions, and not very effective ones. No dictionary of human forms and positions is likely to be complete. There will always be some odd physical posture that the robot would find impossible to match to its library. A human discus thrower or swimmer may easily be classified as "non-human" by a robot - and so might amputees.
What about administering a converse Turing Test?
This is even more seriously flawed. It is possible to design a test that robots could apply to distinguish artificial life forms from humans. But it would have to be non-intrusive and involve no overt, prolonged communication. The alternative is a protracted teletype session, with the human concealed behind a curtain, after which the robot would issue its verdict: the respondent is a human or a robot. This is unthinkable.
Moreover, the application of such a test would "humanize" the robot in many important respects. Humans identify other humans because they are human, too. This is called empathy. A robot would have to be somewhat human to recognize another human being - it takes one to know one, as the saying (rightly) goes.