Abstract

The standard course in the theory of computation introduces students to Turing machines and computability theory. This model delineates what can be computed and what cannot, but it is the negative results that carry the greater consequences. To take a common example, suppose an operating-systems designer wants to determine whether a program will halt given enough memory or other resources. No Turing machine program can be designed to solve this problem, even though Turing machines have unbounded memory, far more than any physical computer. The negative results of computability theory are also robust (a principle enshrined as Church's thesis): since many other models of computation, including the λ-calculus, Post systems, and µ-recursive functions, compute the same class of functions on the natural numbers, negative results in one formalism carry over to all the others.

But the discipline of programming and the architecture of modern computers impose other constraints on what can be computed, and these constraints are ubiquitous. For example, a combination of hardware and software in operating systems prevents programs from manipulating protected data structures except through the system interface. In programming languages, there are programs that "cannot be written," e.g., a sort procedure in Pascal that works on arrays of any size. In databases, there is no Datalog program that computes the parity of a relation (see [1]). Each of these settings involves a uniprocessor machine, but the constraints become even more pronounced in distributed systems: for instance, there is no mutual exclusion protocol for n processors that uses fewer than n atomic read/write registers [5]. All of these problems are computable in Turing's sense: one can encode each of them as a computation over the natural numbers and write a program to solve it. So in what sense is Church's thesis applicable? It is important to remember that computability theory describes only properties of the set of computable functions on the natural numbers (although there have been attempts to extend computability theory and complexity theory to higher-order functions; see, e.g., [13, 12, 20]). If one adopts computability theory as the only theory of computation, one is naturally forced to encode other forms of computation as functions on the natural numbers. Alan Perlis's phrase "Turing tarpit" highlights this potential misuse of computability theory: encoding all computation into a single framework causes many relevant distinctions to be lost.

Any attempt to explain these other computing constraints must look for theories beyond computability theory. Semantics aims to fill this niche: it is the mathematical analysis and synthesis of programming structures. The definition is admittedly broad and not historically based: semantics was originally a means of describing programming languages, and the definition covers areas not usually called "semantics." This essay attempts to flesh out this definition of semantics with examples, comparisons, and sources of theories. While most of the ideas will be familiar to the practicing semanticist, the perspective may be helpful to those in and out of the field.
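To make the halting-problem result invoked above concrete, here is a minimal sketch of the classic diagonalization argument. The code is illustrative only and not from the essay; the hypothetical `halts` oracle is precisely the assumption being refuted.

```python
# Hypothetical sketch (illustration, not the essay's own material):
# the diagonalization showing no procedure can decide halting.

def halts(program, argument):
    """Assumed oracle: True iff program(argument) eventually halts.
    The argument below shows no such total procedure can exist."""
    raise NotImplementedError  # placeholder for the impossible oracle

def diagonal(program):
    """Halt exactly when `program` does NOT halt on its own text."""
    if halts(program, program):
        while True:  # oracle says it halts, so loop forever
            pass
    # otherwise: return immediately, i.e., halt

# Contradiction: consider diagonal(diagonal).
#   If halts(diagonal, diagonal) is True,  diagonal(diagonal) loops forever.
#   If halts(diagonal, diagonal) is False, diagonal(diagonal) halts.
# Either way the oracle answers wrongly, so it cannot exist.
```

The same argument goes through unchanged when resource bounds are added to the question, which is why even the operating-systems variant ("will this program halt given enough memory?") is undecidable.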
