## Turing Machines and Universes

Written by Sam Vaknin

In 1936 an American (Alonzo Church) and a Briton (Alan M. Turing) independently published (as often happens in science) the foundations of a new branch of mathematics and logic: computability, or recursive functions (later developed into automata theory).

The authors confined themselves to dealing with computations which involved "effective" or "mechanical" methods for finding results (which could also be expressed as solutions, or values, of formulae). These methods were so called because they could, in principle, be performed by simple machines (or by "human computers" or "human calculators", to use Turing's unfortunate phrases). The emphasis was on finiteness: a finite number of instructions, a finite number of symbols in each instruction, a finite number of steps to a result. This is why these methods were usable by humans without the aid of an apparatus (with the exception of pencil and paper as memory aids). Moreover, no insight or ingenuity was allowed to "interfere" with, or be part of, the solution-seeking process.

What Church and Turing did was to construct a set of all functions whose values could be obtained by applying effective or mechanical calculation methods. Turing went further down Church's road and designed the "Turing Machine" – a machine which can calculate the values of all functions whose values can be found using effective or mechanical methods. Thus, a program running on a TM (short for Turing Machine in the rest of this text) was really an effective or mechanical method. For initiated readers: the decision problem for propositional calculus is solvable (truth tables provide an effective procedure), but Church and Turing each proved that there is no solution to the decision problem for predicate calculus (the Entscheidungsproblem). Put more simply, it is possible to "prove" the truth value (or the theorem status) of an expression in propositional calculus – but not in predicate calculus. Later it was shown that many functions (even in number theory itself) were not recursive, meaning that they could not be solved by a Turing Machine.
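The decidability of propositional calculus can be made concrete: a truth table is itself an effective, mechanical procedure. The sketch below (an illustration of the idea, not any historical algorithm) decides whether a propositional formula is a tautology by exhaustively checking every truth assignment – finitely many instructions, finitely many steps.

```python
from itertools import product

def is_tautology(formula, variables):
    """Decide, by exhaustive truth-table search, whether a propositional
    formula holds under every possible truth assignment.
    `formula` is a Python function taking one boolean per variable."""
    return all(formula(*values)
               for values in product([True, False], repeat=len(variables)))

# (p -> q) <-> (not q -> not p): contraposition, a classical tautology.
contraposition = lambda p, q: ((not p) or q) == ((not (not q)) or (not p))
print(is_tautology(contraposition, ["p", "q"]))              # True

# p -> q alone is not a tautology (it fails when p=True, q=False).
print(is_tautology(lambda p, q: (not p) or q, ["p", "q"]))   # False
```

The procedure terminates because a formula with n variables has exactly 2^n assignments – precisely the kind of finite, insight-free method the text describes. No analogous enumeration exists for predicate calculus, whose quantifiers range over potentially infinite domains.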

No one has succeeded in proving that a function must be recursive in order to be effectively calculable. This is (as Post noted) a "working hypothesis" supported by overwhelming evidence. We know of no effectively calculable function which is not recursive; by designing new TMs from existing ones we can obtain new effectively calculable functions from existing ones; and TM computability stars in every attempt to understand effective calculability (or such attempts are reducible or equivalent to TM-computable functions).

The Turing Machine itself, though abstract, has many "real world" features. It is a blueprint for a computing device with one "ideal" exception: its unbounded memory (the tape is infinite). Despite its hardware appearance (a read/write head which scans a one-dimensional tape inscribed with ones and zeroes, etc.) – it is really a software application, in today's terminology. It carries out instructions, reads and writes, counts, and so on. It is an automaton designed to implement an effective or mechanical method of solving functions (determining the truth value of propositions). If the transition from input to output is deterministic, we have a classical automaton; if it is determined by a table of probabilities, we have a probabilistic automaton.
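The machine just described – a head, a tape, a finite transition table – is small enough to simulate in a few lines. Below is a minimal sketch (the encoding conventions, state names, and the sample program are my own choices, not Turing's original notation) of a deterministic single-tape TM running a program that increments a binary number:

```python
def run_tm(program, tape_str, state="right", blank="_", max_steps=10_000):
    """Simulate a deterministic single-tape Turing machine.
    `program` maps (state, scanned_symbol) -> (symbol_to_write,
    head_move, next_state).  The tape is unbounded in both
    directions, modelled as a dict from position to symbol."""
    tape = dict(enumerate(tape_str))
    head = 0
    for _ in range(max_steps):
        if state == "HALT":
            break
        write, move, state = program[(state, tape.get(head, blank))]
        tape[head] = write
        head += move
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)).strip(blank)

# A tiny program: binary increment (most-significant bit first).
INCREMENT = {
    ("right", "0"): ("0", +1, "right"),   # scan right to the end of the number
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),   # fell off the right edge: start carrying
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry continues left
    ("carry", "0"): ("1",  0, "HALT"),    # 0 + carry = 1, done
    ("carry", "_"): ("1",  0, "HALT"),    # carry past the leftmost digit
}

print(run_tm(INCREMENT, "1011"))   # 1100  (11 + 1 = 12)
print(run_tm(INCREMENT, "111"))    # 1000  ( 7 + 1 =  8)
```

Note the one concession to reality: `max_steps` bounds the run, because a genuine TM need not halt – which is exactly where the undecidability results bite.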

With time and hype, the limitations of TMs were forgotten. No one can say that the Mind is a TM, because no one can prove that it is engaged in solving only recursive functions. We can say that TMs can do whatever digital computers are doing – but not that digital computers are TMs by definition. Maybe they are – maybe they are not. We do not know enough about them and about their future.

Moreover, the demand that recursive functions be computable by an UNAIDED human seems to restrict the possible equivalents. Inasmuch as computers emulate human computation (Turing believed so when he helped construct the ACE, at the time the fastest computer in the world) – they are TMs. Functions whose values are calculated by AIDED humans with the contribution of a computer are still recursive. It is when humans are aided by other kinds of instruments that we have a problem. If we use measuring devices to determine the values of a function, this does not seem to conform to the definition of a recursive function. So we can generalize and say that functions whose values are calculated by an AIDED human could be recursive, depending on the apparatus used and on the absence of ingenuity or insight (the latter being, anyhow, a weak, non-rigorous requirement which cannot be formalized).

Quantum mechanics is the branch of physics which describes the microcosm. It is governed by the Schrödinger Equation (SE). The SE is an amalgamation of smaller equations, each with its own space coordinates as variables, each describing a separate physical system. The SE has numerous possible solutions, each pertaining to a possible state of the atom in question. These solutions take the form of wavefunctions (which depend, again, on the coordinates of the systems and on their associated energies). The wavefunction describes the probability of a particle (originally, the electron) being found inside a small volume of space defined by the aforementioned coordinates. This probability is proportional to the square of the absolute value of the wavefunction. This is a way of saying: "we cannot really predict what will exactly happen to every single particle. However, we can foresee (with a great measure of accuracy) what will happen to a large population of particles (where they will be found, for instance)."
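The Born rule just stated – probability proportional to the squared magnitude of the wavefunction – can be illustrated numerically. The sketch below uses the textbook particle-in-a-box, whose normalized wavefunction is psi(x) = sqrt(2/L)·sin(n·pi·x/L); the trapezoid integration and the chosen box width are my own illustrative choices, not part of the essay's argument:

```python
import math

def prob_in_interval(n, L, a, b, steps=10_000):
    """Probability of finding a particle-in-a-box (quantum number n,
    box width L) between x=a and x=b: the integral of |psi(x)|^2,
    computed here by simple trapezoid quadrature."""
    def density(x):                       # |psi|^2, the Born rule
        return (2.0 / L) * math.sin(n * math.pi * x / L) ** 2
    h = (b - a) / steps
    return h * (density(a) / 2 + density(b) / 2 +
                sum(density(a + i * h) for i in range(1, steps)))

# Over the whole box the probability must come to 1 (normalization)...
print(round(prob_in_interval(n=1, L=1.0, a=0.0, b=1.0), 6))    # 1.0
# ...and the ground state concentrates probability near the centre.
print(round(prob_in_interval(n=1, L=1.0, a=0.25, b=0.75), 3))  # 0.818
```

The individual position of any one particle remains unpredictable; only these integrated probabilities – the statistics of a large population of identically prepared particles – are fixed by the theory.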

## The Fourth Law of Robotics - Part I

Written by Sam Vaknin

The movie "I, Robot" is a muddled affair. It relies on shoddy pseudo-science and on the general sense of unease that artificial (non-carbon-based) intelligent life forms seem to provoke in us. But it goes no deeper than a comic-book treatment of the important themes that it broaches. "I, Robot" is just another – and relatively inferior – entry in a long line of far better movies, such as "Blade Runner" and "Artificial Intelligence".

Sigmund Freud said that we have an uncanny reaction to the inanimate. This is probably because we know that – pretensions and layers of philosophizing aside – we are nothing but recursive, self-aware, introspective, conscious machines. Special machines, no doubt, but machines all the same.

Consider the James Bond movies. They constitute a decades-spanning gallery of human paranoia. The villains change: communists, neo-Nazis, media moguls. But one kind of villain is a fixture in this psychodrama, in this parade of human phobias: the machine. James Bond always finds himself confronted with hideous, vicious, malicious machines and automata.

It was precisely to counter this wave of unease, even terror – irrational but all-pervasive – that the late sci-fi writer (and scientist) Isaac Asimov invented the Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Many have noticed the lack of consistency and, therefore, the inapplicability of these laws when considered together.
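The strict precedence among the Laws can be made concrete in a toy rule evaluator. This is only a sketch of the priority structure, not Asimov's actual formulation: every boolean field below is a hypothetical placeholder for judgments that, as the text goes on to argue, would in reality require comprehensive models of the world.

```python
def permissible(action):
    """Toy evaluator for the Three Laws, checked in strict priority
    order.  `action` is a dict of hypothetical boolean judgments;
    filling them in correctly is the real, unsolved, problem."""
    # First Law: no harm to humans, by act or by omission.
    if action["harms_human"] or action["inaction_allows_harm"]:
        return False
    # Second Law: obey humans, unless the order conflicts with the First Law.
    if action["disobeys_order"] and not action["order_conflicts_first_law"]:
        return False
    # Third Law: self-preservation, unless it conflicts with Laws 1 or 2.
    if action["endangers_self"] and not action["needed_for_higher_law"]:
        return False
    return True

# Obeying an order to harm a human is forbidden by the First Law,
# even though the Second Law demands obedience.
print(permissible({
    "harms_human": True, "inaction_allows_harm": False,
    "disobeys_order": False, "order_conflicts_first_law": False,
    "endangers_self": False, "needed_for_higher_law": False,
}))   # False
```

Even this caricature exposes the difficulty: the evaluator is only as sound as the predicates feeding it, and nothing in the Laws themselves says how "harm" or "inaction" is to be computed.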

First, they are not derived from any coherent worldview or background. To be properly implemented, and to avoid interpretation in a potentially dangerous manner, the robots in which they are embedded must be equipped with reasonably comprehensive models of the physical universe and of human society.

Without such contexts, these laws soon lead to intractable paradoxes (experienced as a nervous breakdown by one of Asimov's robots). Conflicts are ruinous in automata based on recursive functions (Turing machines), as all robots are. Gödel exposed one such fatal limitation in "Principia Mathematica", ostensibly a comprehensive and self-consistent logical system. It was enough to discredit the whole magnificent edifice constructed by Russell and Whitehead over a decade.

Some argue against this and say that robots need not be automata in the classical, Church-Turing, sense – that they could act according to heuristic, probabilistic rules of decision making. There are many other types of (non-recursive) functions that can be incorporated in a robot, they remind us.

True, but then how can one guarantee that the robot's behavior is fully predictable? How can one be certain that robots will fully and always implement the three laws? Only recursive systems are predictable in principle – though, at times, their complexity makes prediction practically impossible.

This article deals with some commonsense, basic problems raised by the Laws. The next article in this series analyses the Laws from a few vantage points: philosophy, artificial intelligence, and some systems theories.

An immediate question springs to mind: HOW will a robot identify a human being? Surely, in a future of perfect androids constructed of organic materials, no superficial outer scanning will suffice. Structure and composition will not be sufficient differentiating factors.

There are two ways to settle this very practical issue: one is to endow the robot with the ability to conduct a Converse Turing Test (to separate humans from other life forms); the other is to somehow "barcode" all robots by implanting some remotely readable signaling device inside them (such as an RFID – Radio Frequency ID – chip). Both present additional difficulties.

The second solution will prevent the robot from positively identifying humans. It will be able to identify with any certainty robots, and only robots (or humans with such implants). This ignores, for discussion's sake, defects in manufacturing or the loss of implanted identification tags. And what if a robot were to get rid of its tag? Would this also be classified as a "defect in manufacturing"?

In any case, robots will be forced to make a binary choice. They will be compelled to classify one type of physical entity as robots – and all others as "non-robots". Will non-robots include monkeys and parrots? Yes, unless manufacturers equip robots with digital, optical, or molecular representations of the human figure (masculine and feminine) in varying positions (standing, sitting, lying down) – or unless all humans are somehow tagged from birth.

These are cumbersome and repulsive solutions, and not very effective ones. No dictionary of human forms and positions is likely to be complete. There will always be some odd physical posture which the robot will find impossible to match to its library. A human discus thrower or swimmer may easily be classified as "non-human" by a robot – and so might amputees.
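The failure mode just described can be sketched in miniature. Below, postures are crude feature vectors matched against a tiny library by nearest-neighbour distance; every number here is invented for illustration (not real biomechanical data), but the false negative is structural: any posture far enough from every stored template gets labelled "non-human".

```python
import math

# Hypothetical posture "library": each entry is a crude vector of
# three joint angles in degrees (illustrative values only).
POSTURE_LIBRARY = {
    "standing":   (180.0, 180.0, 90.0),
    "sitting":    ( 90.0,  90.0, 90.0),
    "lying_down": (180.0, 180.0,  0.0),
}
THRESHOLD = 40.0   # maximum distance still accepted as "human"

def classify(observed):
    """Nearest-neighbour match against the posture library - the very
    scheme the text criticizes.  Anything farther than THRESHOLD from
    every stored posture is labelled non-human."""
    best = min(math.dist(observed, pose) for pose in POSTURE_LIBRARY.values())
    return "human" if best <= THRESHOLD else "non-human"

print(classify((175.0, 170.0, 85.0)))    # human: close to "standing"
print(classify((45.0, 160.0, 120.0)))    # non-human: a mid-throw pose
```

However many templates are added, the library stays finite while human postures do not – which is the essay's point.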