Turing Invents the Universal Turing Machine

  • Last updated on November 10, 2022

Alan Mathison Turing invented a precise concept of an abstract computing machine, providing a basis for both the theory of computation and the development of digital computers.

Summary of Event

In 1931, Kurt Gödel proved that arithmetic (and mathematics generally) is either inconsistent or incomplete, meaning that any consistent axiom system for arithmetic must fail to include some true arithmetical statements. This result answered the first two of the three questions that David Hilbert had posed three years earlier: Is mathematics complete? Is it consistent? Is it decidable? The last question asks whether there is a definite procedure (an algorithm) that is guaranteed to produce a correct answer as to whether an arbitrary mathematical sentence is true. After Gödel’s proof, there was much speculation that mathematics must be undecidable. In the late spring of 1935, shortly after having been made a fellow of King’s College, Cambridge, Alan Mathison Turing set out to prove it. To do so, Turing needed a precise concept of “definite procedure,” which led him to invent what are now called Turing machines.

A Turing machine is an abstract machine rather than a physical one: It is used to define or describe algorithms precisely, not to execute them. Every Turing machine has three components: a control unit, which takes any of a finite number of states; a two-way infinite tape, divided into squares each capable of holding any one of a finite number of symbols; and a read-write head, which writes a symbol, moves left or right over the tape, and reads the new square. The machine’s behavior on the next step is fully determined by its current state and the symbol being read. Specific Turing machines are described with tables of quintuples, such as 〈Q0, A, B, R, Q1〉, which means: If the machine is in state Q0 and reading the symbol A, then write the symbol B (replacing the symbol A), move right one square on the tape, and go into state Q1. It can be stipulated that a Turing machine halts when it enters a state for which no quintuple exists.

As a simple example, the following represents a machine that determines the parity of a string of zeroes and ones, when the read-write head starts at the rightmost digit (that is, the machine will report “1” if the string has an odd number of ones; otherwise, it reports “0”):

〈Q0, 0, 0, L, Q0〉

〈Q0, 1, 1, L, Q1〉

〈Q1, 0, 0, L, Q1〉

〈Q1, 1, 1, L, Q0〉

〈Q0, b, 0, L, Q2〉

〈Q1, b, 1, L, Q2〉

In this machine, state Q0 signifies that the string scanned so far has even parity, and Q1 signifies odd parity. The first quintuple scans a zero in the even-parity state; the read-write head moves left, and the machine stays in even parity. The next quintuple scans a one and so changes the state from even to odd parity. The next two quintuples are similar but describe the machine’s behavior in the odd-parity state. The final two quintuples recognize the end of the string (b stands for the blank symbol), write the answer on the blank square just to the left of the input string, and stop (by entering the undefined state Q2).
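
The machine above can be executed by a short interpreter. The following is a minimal sketch in Python (the function and variable names are illustrative, not part of Turing's formalism); the tape is stored sparsely so that the unbounded blank squares on either side need not be represented:

```python
# A minimal sketch of a Turing-machine interpreter, here running the
# parity machine described above. Rules map (state, read symbol) to
# (write symbol, move direction, next state), mirroring the quintuples.

def run_turing_machine(rules, tape, state, head):
    """Apply quintuples until no rule matches the current (state, symbol)
    pair, then halt; unwritten squares read as the blank symbol 'b'."""
    tape = dict(enumerate(tape))   # sparse tape: index -> symbol
    while (state, tape.get(head, 'b')) in rules:
        write, move, state = rules[(state, tape.get(head, 'b'))]
        tape[head] = write
        head += -1 if move == 'L' else 1
    return tape, state

# The six quintuples of the parity machine (b is the blank symbol).
PARITY = {
    ('Q0', '0'): ('0', 'L', 'Q0'),
    ('Q0', '1'): ('1', 'L', 'Q1'),
    ('Q1', '0'): ('0', 'L', 'Q1'),
    ('Q1', '1'): ('1', 'L', 'Q0'),
    ('Q0', 'b'): ('0', 'L', 'Q2'),
    ('Q1', 'b'): ('1', 'L', 'Q2'),
}

# Start in Q0 with the head on the rightmost digit of "1101" (three ones).
tape, final = run_turing_machine(PARITY, "1101", state='Q0', head=3)
print(tape[-1], final)   # the answer, written just left of the input: 1 Q2
```

Running the interpreter on "1101" (three ones, odd parity) writes a 1 on the square to the left of the input and halts in the undefined state Q2, exactly as the quintuple table prescribes.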

One of the remarkable things about Turing machines is that they are universal; that is, they are capable of performing the computations done by any other Turing machine. This is achieved through the encoding of that other machine’s table of quintuples in a numerical string (called the Gödel number of the machine); then, this number can be written on the universal machine’s input tape. The universal machine simulates the target machine by examining and interpreting the appropriate quintuple on the tape. This encoding of the target machine is analogous to software programs on modern digital computers.
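
The idea of encoding a machine's table as a single number can be illustrated with a toy sketch; the particular digit code below is an invented assumption for illustration, not Turing's actual encoding:

```python
# A toy sketch of Goedel numbering: each symbol in a quintuple is given a
# fixed one-digit code, and the whole table is concatenated into a single
# integer that could be written on a universal machine's tape.

def godel_number(quintuples):
    """Pack a machine's quintuple table into one decodable integer."""
    code = {'Q0': '1', 'Q1': '2', 'Q2': '3', '0': '4', '1': '5',
            'b': '6', 'L': '7', 'R': '8'}
    digits = ''.join(code[part] for q in quintuples for part in q)
    return int(digits)

# Two quintuples of the parity machine, encoded as one integer.
rules = [('Q0', '0', '0', 'L', 'Q0'), ('Q0', '1', '1', 'L', 'Q1')]
print(godel_number(rules))   # -> 1447115572
```

Because each symbol occupies exactly one digit, the table can be recovered from the number; a universal machine works by reading such an encoding and looking up the appropriate quintuple at each step.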

With this basis, Hilbert’s third question can be answered by proving that mathematics is undecidable. For any Turing machine, one can ask whether, given a certain input, the machine will halt. One also can ask the broader question of whether there is a Turing machine, call it halter, that, when given the Gödel number of a target machine and that machine’s input, can decide in a finite number of steps whether the target machine would stop on that input. (This is called the halting problem.) If one assumes that halter exists, one can construct from it a machine that, given the Gödel number of any machine, does the opposite of whatever halter predicts that machine will do when run on its own Gödel number. Turing showed that feeding this new machine its own Gödel number yields a self-referential contradiction: the machine halts if and only if it does not halt. Whichever way one turns, there is a contradiction, so halter cannot exist. Turing also demonstrated that there are mathematical statements for which any decision procedure would presuppose the existence of halter; thus he established the existence of undecidable mathematical statements.
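
The diagonal argument can be sketched in programming terms. In the hypothetical code below, `claimed_halts` stands for a purported halting decider (no such function can actually exist); the construction refutes any concrete candidate by behaving opposite to its prediction:

```python
# A sketch of the diagonal argument behind the halting problem.
# `claimed_halts` is a hypothetical decider supplied by the caller; the
# "contrarian" program built from it does the opposite of whatever the
# decider predicts about the contrarian itself.

def make_contrarian(claimed_halts):
    """Build a program that contradicts the decider's prediction about it."""
    def contrarian():
        if claimed_halts(contrarian):
            while True:      # predicted to halt -> loop forever
                pass
        # predicted to loop forever -> halt immediately
    return contrarian

# Any concrete decider is refuted by its own contrarian. For example, a
# decider that answers "does not halt" for every program:
always_no = lambda program: False
make_contrarian(always_no)()   # returns at once, contradicting the prediction
```

Whatever the decider answers about the contrarian run on itself, the contrarian does the reverse, which is the contradiction at the heart of Turing's proof.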

Turing presented these results in a paper titled “On Computable Numbers, with an Application to the Entscheidungsproblem” in 1936. Shortly before, however, Alonzo Church had published his own proof of the undecidability of mathematics using a different formulation called the lambda calculus. Because the two methods of proof were substantially different, Turing’s paper was accepted for publication. First, however, Turing was obliged to show in an appendix that Turing computability and Church’s definition of effective calculability were equivalent. Both papers argued for the Church-Turing thesis (sometimes called Church’s thesis), which asserts that their equivalent concepts of computability precisely capture the intuitive concept of an effective procedure or definite algorithm. Given that this intuitive concept cannot itself be exactly specified, the thesis is unprovable. Nevertheless, it is a remarkable fact that every adequate substitute for “effective procedure” has been proved to be equivalent, including the contemporaneous formulations by Emil Post and by Stephen Kleene (the latter using general recursive functions).

Furthermore, every coherent extension of the concept of Turing machines has been shown to possess exactly the same computational power as Turing machines themselves, as long as they adhere to the same standard of definiteness—for instance, not allowing randomized state changes. (Note that “power” here refers to the range of functions that can be computed and not to the informal notion of the speed of computation.) For example, adding multiple tapes or multiple read-write heads per tape in no way increases the set of functions that a Turing machine can compute, although some computations will become faster. Von Neumann machines—everyday office computers and personal computers—are likewise no more powerful than Turing machines (for that reason they sometimes are mistakenly called Turing machines).

Of course, some machines—such as finite automata—are not as powerful as Turing machines. The convergence of all sufficiently general kinds of machines on the class identified by Church and Turing seems explainable only if one assumes the truth of the Church-Turing thesis.

Significance

The invention of the Turing machine provided a precise concept of computation, which helped to pave the way for the introduction of practical computing machinery. Although the Turing machine as such would make an impractical computing device (storing everything on tape would be impossibly slow), it anticipated much of the functionality required of practical computers. During and immediately after World War II, various engineering efforts produced the first working electronic digital computers. Especially noteworthy are the Electronic Numerical Integrator and Computer (ENIAC), developed from 1943 to 1946 at the University of Pennsylvania, and the Automatic Computing Engine (ACE) at England’s National Physical Laboratory, designed by Turing in 1945. The ENIAC led to a more general computer design by John von Neumann that has had a pervasive effect in the computer industry; most modern computers are now classified as von Neumann machines.

Beyond providing a model for practical computation, the concept of Turing computability has supported the development of much of the theory of computer science. In addition to its bearing on the nature of unsolvable problems, Turing computability has been instrumental in the study of the degrees of solvability of problems—more accurately, the computational complexity of problems—a major research area in theoretical computer science. Computational complexity is measured by the amount of time (the number of steps) required by an optimal Turing machine to solve a problem. The complexity of a problem is expressed as a function of its size; thus the size of the problem of sequentially searching a list of N items for a specific item is simply N, and the time the search takes, on average, is directly proportional to N/2. Two classes of complexity are of special interest: polynomial-time problems and exponential-time problems. Polynomial-time problems require a number of steps bounded by a polynomial function of the problem size—such as N/2, N, N², and so on. Exponential-time problems require on the order of 2^N, or 2^(N²), and so on, steps to complete. Exponential problems always grow faster than polynomial problems in the computation time required, and so they are often called intractable. An important set of problems, called NP problems, are defined in terms of a special class of Turing machine (nondeterministic Turing machines). It is an unproved, but widely believed, thesis that many NP problems are necessarily exponential.
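
A quick tabulation shows why exponential growth earns the label "intractable"; the problem sizes chosen below are arbitrary:

```python
# Step counts for a linear scan (N), a quadratic algorithm (N^2), and an
# exponential one (2^N) at a few problem sizes. The exponential column
# overtakes the others and quickly becomes astronomically large.

for n in (10, 20, 40, 80):
    print(f"N={n:3d}  linear={n:4d}  quadratic={n**2:6d}  exponential={2**n}")
```

At N = 80 the exponential column already exceeds 10²⁴, which is why even fast hardware cannot rescue an exponential-time algorithm on large inputs.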

Given that Gödel, Turing, and others collectively proved that mathematics cannot be completely mechanized, it might be thought that Turing should have been pessimistic about the prospects for artificial intelligence—that is, the attempt to mechanize intelligence. In fact, however, Turing believed that whatever limits apply to machines in this regard apply equally to humans—that neither humans nor machines can have access to all mathematical truths or to an infallible decision procedure. Turing, in his 1950 paper “Computing Machinery and Intelligence,” created the well-known Turing test for intelligence, which has provided the main backdrop for philosophical inquiries in artificial intelligence since that time. In recognition of Turing’s achievements, the Association for Computing Machinery named its most prestigious annual research prize the ACM Turing Award.

Further Reading
  • Ceruzzi, Paul E. A History of Modern Computing. 2d ed. Cambridge, Mass.: MIT Press, 2003. A straightforward, comprehensive history of computing. Places the technology in its appropriate social context.
  • Davis, Martin, ed. The Undecidable. New York: Raven, 1965. A collection of many of the seminal articles in the development of computation theory and logic. Includes the original papers alluded to above by Church, Gödel, Kleene, and Post. These papers are accessible to those already acquainted with formal logic. Reprints the original paper by Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem.” Proceedings of the London Mathematical Society 42 (January, 1937): 230-265.
  • Dewdney, A. K. The Turing Omnibus. New York: Owl Books, 1993. Collection of short and engaging tutorials on sixty-one different topics in computer science by the “Mathematical Recreations” columnist for Scientific American. The chapter titled “The Halting Problem” presents an informative discussion of that issue.
  • Garey, Michael R., and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco: W. H. Freeman, 1979. A standard, introductory college text on computational complexity. Provides a very accessible survey for those with some college-level mathematics or logic. Contains an exhaustive list of NP-complete problems known as of 1979.
  • Hodges, Andrew. Alan Turing: The Enigma. New York: Simon & Schuster, 1983. A thorough, enjoyable biography of Turing. Includes a lengthy, fascinating account of Turing’s successful cracking of Nazi Germany’s encryption devices. Also provides background material to Turing’s foundational work on artificial intelligence in the late 1940’s.
  • Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books, 1999. The concepts of self-reference, recursion, and infinite regress are woven into an absorbing tapestry. Explicates Gödel’s incompleteness theorem, drawing analogies with M. C. Escher’s reflexive drawings and the musical edifices of Bach. Nontechnical but lengthy.
  • Minsky, Marvin. Computation: Finite and Infinite Machines. Englewood Cliffs, N.J.: Prentice-Hall, 1967. An especially clear introduction to automata and computation theory, used as a freshman college textbook. Presupposes no college-level mathematics, but it does introduce and use formal notation. Minsky is a leading figure in artificial intelligence.
  • Turing, Alan. “Computing Machinery and Intelligence.” Mind 59 (October, 1950): 433-460. An exploration of the concepts of human and machine intelligence. Introduces the famous Turing test for intelligence. Reprinted frequently; for example, in Alan Ross Anderson, ed. Minds and Machines. Englewood Cliffs, N.J.: Prentice-Hall, 1964.

Brouwer Develops Intuitionist Foundations of Mathematics

Markov Discovers the Theory of Linked Probabilities

Principia Mathematica Defines the Logistic Movement

Noether Shows the Equivalence of Symmetry and Conservation

IBM Changes Its Name and Product Line

Bush Builds the First Differential Analyzer

Gödel Proves Incompleteness-Inconsistency for Formal Systems

Bourbaki Group Publishes ÉLÉments de mathématique
