Pattern recognition by human beings and infrahuman organisms is remarkable chiefly because it can take place with speed and with a relatively low probability of error, even when the incoming information has been subjected to a wide variety of transformations and extreme degradation. Recent attempts to build automata to recognize patterns have served to increase, rather than diminish, respect for this capacity of living systems. Almost without exception, the existing, commercially available automata recognize patterns only under conditions so restricted that a living organism would soon perish were it necessary for it so to limit the class of inputs to which it could respond with reasonable accuracy. Commercial number-reading machines, for example, usually require that characters come from a specially designed type font, that they be of a certain size, and that they occur only on certain portions of the document to be read. The human reader does not operate under such restrictions. He can read numbers from many type fonts, even from fonts which he has never before seen; nor does he have to be told the name of the font in order to be able to read it. The characters to be read do not have to be in one particular part of the page, nor do they need to be of one particular size.

This human facility in pattern recognition has raised two sorts of problems. First, it has led some to overstate the capabilities of the human pattern recognizer. We do not recognize patterns with equal ease under all the rigid transformations. Recognition thresholds for letters presented upside down are measurably higher than for upright letters, though some discussions of human pattern recognition imply that it is invariant under rotation. The other problem is that the human being recognizes familiar characters so rapidly and so accurately under most conditions that it is difficult to obtain stable performance estimates which are sensitive to experimental manipulation.
Tachistoscopic exposure has been the favored method for keeping observers from making close to 100% correct identifications from a known and familiar set of characters, but this is an awkward technique for obtaining a legibility measure on a new type font, for example. Other techniques have involved performing some transformation on the characters repeatedly until the probability of correct recognition reaches some lower limit. For example, the characters can be made progressively smaller, the technique used in the ordinary Snellen chart to measure visual acuity. Studies concerned primarily with transmission of visual information over communication