# Turing machine equivalents (Ofer Abarbanel online library)

A Turing machine is a hypothetical computing device, first conceived by Alan Turing in 1936. Turing machines manipulate symbols on a potentially infinite strip of tape according to a finite table of rules, and they provide the theoretical underpinnings for the notion of a computer algorithm.

While none of the following models have been shown to have more power than the single-tape, one-way infinite, multi-symbol Turing-machine model, their authors defined and used them to investigate questions and solve problems more easily than they could have if they had stayed with Turing’s a-machine model.

## Machines equivalent to the Turing machine model

### Turing equivalence

Many machines that might be thought to have more computational capability than a simple universal Turing machine can be shown to have no more power.[1] They might compute faster, perhaps, or use less memory, or their instruction set might be smaller, but they cannot compute more powerfully (i.e. more mathematical functions). (The Church–Turing thesis hypothesizes this to be true: that anything that can be “computed” can be computed by some Turing machine.)

## The sequential-machine models

All of the following are called “sequential machine models” to distinguish them from “parallel machine models”.[2]

### Register machine models

Peter van Emde Boas includes all machines of this type in one class, "the register machine".[2] Historically, however, the literature has also applied the name "register machine" to the most primitive member of the group, the "counter machine"; and the most primitive embodiment of a counter machine is sometimes called the "Minsky machine".

#### The "counter machine", also called a "register machine" model

The primitive model register machine is, in effect, a multitape 2-symbol Post–Turing machine with its behaviour restricted so its tapes act like simple “counters”.

By the time of Melzak, Lambek, and Minsky the notion of a “computer program” produced a different type of simple machine with many left-ended tapes cut from a Post–Turing tape. In all cases the models permit only two tape symbols { mark, blank }.[3]

Some versions represent the positive integers as a string/stack of marks in a "register" (i.e. a left-ended tape), with a blank tape representing the count "0". Minsky eliminated the PRINT instruction at the expense of requiring a single mandatory mark at the left end of each tape.[3]

In this model the single-ended tapes-as-registers are thought of as "counters", and the instructions are restricted to only two (or three, if the TEST/DECREMENT instruction is atomised). Two common instruction sets are the following:

(1): { INC ( r ), DEC ( r ), JZ ( r,z ) }, i.e.

{ INCrement contents of register #r; DECrement contents of register #r; IF contents of #r=Zero THEN Jump-to Instruction #z}
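Instruction set (1) is small enough to interpret in a few lines. The sketch below is an illustration, not any particular author's formulation: the tuple encoding of programs, the dict of registers, and halting by running off the end of the program are all assumptions made here for concreteness.

```python
# A minimal counter-machine interpreter for instruction set (1):
# INC(r), DEC(r), JZ(r, z). Registers hold non-negative integers.

def run(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "INC":                # increment register r
            registers[args[0]] += 1
            pc += 1
        elif op == "DEC":              # decrement register r (never below 0)
            r = args[0]
            registers[r] = max(0, registers[r] - 1)
            pc += 1
        elif op == "JZ":               # if register r holds zero, jump to z
            r, z = args
            pc = z if registers[r] == 0 else pc + 1
    return registers

# Example program: add register 1 into register 0 (destroying register 1).
add = [
    ("JZ", 1, 4),   # 0: done when r1 = 0
    ("DEC", 1),     # 1
    ("INC", 0),     # 2
    ("JZ", 2, 0),   # 3: unconditional jump via an always-zero register 2
]
print(run(add, {0: 3, 1: 2, 2: 0}))  # {0: 5, 1: 0, 2: 0}
```

Note the trick in the last line of the program: with only a conditional jump available, an unconditional jump is obtained by testing a register that is known to stay zero.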

(2): { CLR ( r ); INC ( r ); JE ( ri, rj, z ) }, i.e.

{ CLeaR contents of register r; INCrement contents of r; compare contents of ri to rj and if Equal then Jump to instruction z}
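Set (2) looks weaker at first glance, since it has no DEC and no zero test, but JE(r, r, z) with a register compared to itself yields an unconditional jump, and counting a cleared register up until it equals a source register effects a copy. A sketch under the same assumed encoding as before:

```python
# Interpreter for instruction set (2): CLR(r), INC(r), JE(ri, rj, z).

def run2(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "CLR":                # clear register r to 0
            registers[args[0]] = 0
            pc += 1
        elif op == "INC":              # increment register r
            registers[args[0]] += 1
            pc += 1
        elif op == "JE":               # jump to z when ri equals rj
            ri, rj, z = args
            pc = z if registers[ri] == registers[rj] else pc + 1
    return registers

# Copy register 1 into register 0 without any DEC instruction.
copy = [
    ("CLR", 0),        # 0: r0 := 0
    ("JE", 0, 1, 4),   # 1: done when r0 = r1
    ("INC", 0),        # 2
    ("JE", 0, 0, 1),   # 3: unconditional jump back to the test
]
print(run2(copy, {0: 9, 1: 3}))  # {0: 3, 1: 3}
```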

Although his model is more complicated than this simple description suggests, the Melzak "pebble" model extended the notion of "counter" to permit multi-pebble adds and subtracts.

#### The random-access machine (RAM) model

Melzak recognised a couple of serious defects in his register/counter-machine model:[5] (i) without a form of indirect addressing he would not be able to "easily" show the model to be Turing equivalent, and (ii) the program and registers lived in different "spaces", so self-modifying programs would not be easy. When Melzak added indirect addressing to his model he created a random-access machine model.

(However, with a Gödel numbering of the instructions, Minsky offered a proof that even such a machine can compute the general recursive functions, i.e. that μ-recursion is indeed possible.[3])

Unlike the RASP model, the RAM model does not allow the machine's actions to modify its instructions. Most models include an accumulator, though some work only register-to-register, with no accumulator.
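The difference that indirect addressing makes can be shown concretely. In the toy sketch below (the helper name and dict-of-registers encoding are assumptions for illustration, not from Melzak), a direct operand names a register itself, while an indirect operand names a register whose *contents* name the register actually used:

```python
# Toy illustration of direct vs. indirect operand resolution in a RAM.

def resolve_operand(registers, mode, r):
    """Direct: operand r names register r itself.
    Indirect: operand r names the register whose number is stored in r."""
    return r if mode == "direct" else registers[r]

regs = {0: 2, 1: 10, 2: 99}
print(regs[resolve_operand(regs, "direct", 0)])    # 2  (register 0 itself)
print(regs[resolve_operand(regs, "indirect", 0)])  # 99 (register 0 holds 2,
                                                   #     so register 2 is used)
```

Because the target register number is itself data, a fixed program can reach an unbounded set of registers, which is what makes the Turing-equivalence proof "easy".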

van Emde Boas divides the various RAM models into a number of sub-types:[2]

- SRAM: the "successor RAM", with only one arithmetic instruction, the successor (INCREMENT h); the other instructions include CLEAR h and a conditional jump on register equality (IF ri = rj THEN jump to z)

- RAM: the standard model, with addition and subtraction

- MRAM: the RAM augmented with multiplication and division

- BRAM, MBRAM: bitwise Boolean versions of the RAM and MRAM

- N-prefixed (e.g. NRAM): non-deterministic versions of any of the above

#### The random-access stored program (RASP) machine model

The RASP is a RAM with the instructions stored together with their data in the same "space", i.e. the same sequence of registers. The notion of a RASP was described at least as early as Kiphengst. His model had a "mill" (an accumulator), but now the instructions were in the registers with the data, the so-called von Neumann architecture. When the RASP has alternating even and odd registers, the even holding the "operation code" (instruction) and the odd holding its "operand" (parameter), then indirect addressing is achieved by simply modifying an instruction's operand.[6]
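The even/odd layout and the operand-modification trick can be sketched as follows. This is an illustration only: the single shared dict for program and data, the use of a single INC opcode plus a HALT marker, and the specific register numbers are all assumptions made here, not details of any cited model.

```python
# Sketch of a RASP with alternating even (opcode) and odd (operand)
# registers, all living in one shared memory with the data.

mem = {}

# Program at registers 0..2: INC the register named by the operand in
# register 1, then HALT. Data lives at registers 100 and 101.
mem.update({0: "INC", 1: 100, 2: "HALT", 100: 5, 101: 7})

def run_rasp():
    pc = 0
    while mem[pc] != "HALT":
        op, operand = mem[pc], mem[pc + 1]
        if op == "INC":
            mem[operand] = mem.get(operand, 0) + 1
        pc += 2                       # each instruction spans two registers

run_rasp()
print(mem[100])   # 6: the program incremented register 100

# "Indirect addressing" by self-modification: overwrite the operand in
# odd register 1, and the unchanged program now targets register 101.
mem[1] = 101
run_rasp()
print(mem[101])   # 8
```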

The original RASP model of Elgot and Robinson had only three instructions in the fashion of the register-machine model,[7] but they placed them in the register space together with their data. (Here COPY takes the place of CLEAR when one register, e.g. "z" or "0", starts with and always contains 0. This trick is not unusual. A unit 1 in a register "unit" or "1" is also useful.)

{ INC ( r ), COPY ( ri, rj ), JE ( ri, rj, z ) }

The RASP models allow indirect as well as direct addressing; some allow "immediate" instructions too, e.g. "Load accumulator with the constant 3". The instructions may come from a highly restricted set, such as the following 16 instructions of Hartmanis.[8] This model uses an accumulator A. The mnemonics are the authors' own (CLA is "load accumulator", with a constant or from a register; STO is "store accumulator"). Except for the jumps, the operand syntax is "n", "<n>" and "<<n>>" for "immediate", "direct" and "indirect" addressing. Jumps are via two "transfer instructions": TRA, an unconditional jump that jams the target, given directly ("n") or indirectly ("<n>", the contents of register n), into the instruction counter, and TRZ, a conditional jump taken in the same manner as TRA when the accumulator is zero:

{ ADD n, ADD < n >, ADD << n >>, SUB n, SUB < n >, SUB << n >>, CLA n, CLA < n >, CLA << n >>, STO < n >, STO << n >>, TRA n, TRA < n >, TRZ n, TRZ < n >, HALT }
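A sketch of an accumulator machine in the spirit of this set, implementing the three addressing modes and a subset of the instructions, may make the mode syntax concrete. The encoding (tuples, mode strings, dict of registers) is an assumption for illustration, not Hartmanis's own formulation:

```python
# Accumulator machine with "immediate" (n), "direct" (<n>) and
# "indirect" (<<n>>) addressing modes, after the style described above.

def resolve(regs, mode, n):
    if mode == "imm":
        return n                        # the constant n itself
    if mode == "dir":
        return regs.get(n, 0)           # contents of register n
    return regs.get(regs.get(n, 0), 0)  # <<n>>: register named by register n

def run_acc(program, regs):
    acc, pc = 0, 0
    while pc < len(program):
        op, mode, n = program[pc]
        if op == "CLA":                 # load accumulator
            acc = resolve(regs, mode, n)
        elif op == "ADD":
            acc += resolve(regs, mode, n)
        elif op == "SUB":
            acc -= resolve(regs, mode, n)
        elif op == "STO":               # store accumulator into a register
            regs[n if mode == "dir" else regs.get(n, 0)] = acc
        elif op == "TRA":               # jam target into instruction counter
            pc = resolve(regs, mode, n)
            continue
        elif op == "TRZ":               # same, but only when acc is zero
            if acc == 0:
                pc = resolve(regs, mode, n)
                continue
        pc += 1
    return regs

# (register 1 + register 2) stored into register 3:
prog = [("CLA", "dir", 1), ("ADD", "dir", 2), ("STO", "dir", 3)]
print(run_acc(prog, {1: 4, 2: 5}))  # register 3 ends holding 9
```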

### The pointer machine model

A relative latecomer is Schönhage's storage modification machine, or pointer machine. Other versions are the Kolmogorov–Uspenskii machine and Knuth's "linking automaton" proposal (for references see pointer machine). Like a state-machine diagram, a node emits at least two labelled "edges" (arrows) that point to other nodes, which in turn point to other nodes, and so on. The outside world points at the center node.
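The storage structure can be sketched as nodes with labelled out-edges, navigated by words over the edge labels starting from the center node. The two-label alphabet, class names, and dict representation below are assumptions for illustration:

```python
# Minimal sketch of a pointer-machine storage structure: nodes carry
# labelled edges to other nodes; all access starts at the center node.

class Node:
    def __init__(self):
        self.edges = {}        # edge label -> target Node

def follow(center, word):
    """Follow a word over edge labels, starting from the center node."""
    node = center
    for label in word:
        node = node.edges[label]
    return node

center, x, y = Node(), Node(), Node()
center.edges = {"a": x, "b": center}   # edges may loop back
x.edges = {"a": y, "b": center}
y.edges = {"a": y, "b": y}

print(follow(center, "aa") is y)       # True: center -a-> x -a-> y
print(follow(center, "b") is center)   # True: a self-loop
```

Computation in such a machine proceeds by creating nodes and redirecting edges, so a node's "address" is simply the word that reaches it from the center.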

## References

^ John Hopcroft and Jeffrey Ullman (1979). Introduction to Automata Theory, Languages and Computation (1st ed.). Addison–Wesley, Reading Mass. ISBN 0-201-02988-X.

^ Peter van Emde Boas, "Machine Models and Simulations", in Jan van Leeuwen, ed., Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, pp. 3–66, The MIT Press/Elsevier, 1990. ISBN 0-262-72014-0 (Volume A). QA76.H279 1990.

^ Marvin Minsky, Computation: Finite and Infinite Machines, Prentice–Hall, Inc., N.J., 1967. See Chapter 8, Section 8.2, "Unsolvability of the Halting Problem".

^ Pippenger, Nicholas; Fischer, Michael J. (1979), “Relations Among Complexity Measures”, J ACM, 26 (3): 361–381, doi:10.1145/322123.322138

^ Z. A. Melzak, An informal Arithmetical Approach to Computability and Computation, Canadian Mathematical Bulletin, vol. 4, no. 3. September 1961 pages 279–293.

^ Stephen A. Cook and Robert A. Reckhow, "Time-bounded random access machines", Journal of Computer and System Sciences 7 (1973): 354–375.

^ Calvin Elgot and Abraham Robinson (1964), Random-Access Stored-Program Machines, an Approach to Programming Languages, Journal of the Association for Computing Machinery, Vol. 11, No. 4 (October 1964), pp. 365–399.

^ J. Hartmanis (1971), “Computational Complexity of Random Access Stored Program Machines,” Mathematical Systems Theory 5, 3 (1971) pp. 232–245.

^ Christos Papadimitriou (1993). Computational Complexity (1st ed.). Addison Wesley. ISBN 0-201-53082-1. Chapter 2: Turing machines, pp. 19–56.

^ A. Schönhage (1980), "Storage Modification Machines", SIAM Journal on Computing, Vol. 9, No. 3, August 1980.

^ Marvin Minsky (1961), "Recursive Unsolvability of Post's Problem of 'Tag' and Other Topics in Theory of Turing Machines", Annals of Mathematics, 74 (3): 437–455. doi:10.2307/1970290. JSTOR 1970290.

^ John C. Shepherdson and H. E. Sturgis (1963), "Computability of Recursive Functions", Journal of the Association for Computing Machinery (JACM) 10: 217–255. (Received December 1961.)
