Most computer users have become so accustomed to the standard von Neumann approach to computing that they rarely question the fundamental assumptions implicit in the underlying architecture. Foremost among these is the idea that a computer consists essentially of a powerful and sophisticated central processing unit (CPU) attached to a large amount of peripheral memory. This model, common to the vast majority of computers built to date, has at least two important implications. First, as successive generations of CPUs and memories become progressively faster, the communication between them becomes the major bottleneck. Second, using the computer always comes down to instructing the CPU to perform some particular sequence of operations on various portions of memory. This latter point in particular has attracted much attention from computer scientists, because it burdens the programmer with a great deal of unnecessary tedium: instead of merely informing the computer of what would constitute an acceptable solution to a problem, the programmer must painstakingly tell the computer how to obtain that solution (Balaban and Murray 1986). The continuing evolution of high-level languages (e.g., Lisp, Smalltalk, Prolog) can be seen as an ongoing attempt to transfer this undesirable complexity from the domain of the programmer to that of the computer itself. Constraint languages such as TK!Solver, for instance, allow the computer to arrive at solutions simply by evaluating user-specified constraints (Levitt 1984). Even here, however, the programmer must still inform the computer explicitly of each operative constraint.
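To make the "what versus how" distinction concrete, the following Python sketch illustrates the declarative style of a constraint language. It is a hypothetical illustration, not TK!Solver's actual machinery: the function name `solve_product_constraint` and its interface are assumptions introduced here. The user states only the relation that must hold (a product constraint such as Ohm's law, V = I * R), and a tiny solver decides which variable to compute from whichever values happen to be known.

```python
def solve_product_constraint(values):
    """Enforce a * b = c over named values; unknowns are None.
    (Hypothetical sketch of one constraint, not TK!Solver's API.)
    Fills in a single unknown from the two known values."""
    a, b, c = values["a"], values["b"], values["c"]
    if c is None and a is not None and b is not None:
        values["c"] = a * b          # forward: compute the product
    elif a is None and b is not None and c is not None:
        values["a"] = c / b          # backward: compute one factor
    elif b is None and a is not None and c is not None:
        values["b"] = c / a          # backward: compute the other factor
    return values

# Ohm's law, V = I * R, cast as a*b = c with a = I, b = R, c = V.
# The same declared constraint answers different questions,
# depending on which values the user supplies:
print(solve_product_constraint({"a": 2.0, "b": 6.0, "c": None}))   # V from I, R
print(solve_product_constraint({"a": None, "b": 6.0, "c": 12.0}))  # I from V, R
```

Note that the user never specifies a sequence of CPU operations; the direction of computation is chosen by the solver at evaluation time. What the user cannot avoid, as the paragraph above observes, is stating each operative constraint explicitly.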