Because I am generally in agreement with Fox's argument, let's begin with one of the rare points on which I think he is wrong. The problem arises from an apparently erroneous parallel between the human brain and a computer. Fox's main point is, however, both reinforced and clarified by considering the neuroscience of stereotyping and prejudice.

In exploring why humans, unlike Star Trek's Mr. Spock or Kahneman and Tversky's rational decision maker, are more comfortable with stereotypical thinking than with logically treating each case on its own merits, Fox says that reliance on prejudice or stereotype is a rough-and-ready process whose routine, if we could work the whole thing out, would be an elaborate critical-path analysis, and would indeed be a computer program. In this view, thinking in stereotypes is logical analysis run at a trillion times the speed. I believe this is incorrect, or at least misleading, insofar as these kinds of thinking are qualitatively distinct.

Cognitive neuroscience has shown that the human brain has a hierarchically organized, modular structure (Gazzaniga, 1985, 1988). For each sensory modality, functionally specialized processing tasks are localized; distinct neuronal structures and pathways process information in parallel. In addition to being shaped by feedback (conditioning, associative learning, memory, etc.), these neuronal modules often exhibit feedforward characteristics: selective response, expectation, and active search for appropriate stimulation. Many neuronal modules or ensembles have innate response tendencies, though in most cases these can be modulated or changed by experience. The holistic, tabula rasa model of the brain, inherited from Locke and the empiricists and popularized by the behaviorists, is dead. Whereas Kahneman and Tversky apparently conceptualize thought in terms of the processing algorithms of a serial computer, the findings of neuroscience show that the human brain functions as a complex parallel processing system.
Within this system, of course, there is a capacity for abstract, linear computation (usually associated with left-hemispheric functioning, though one should beware of the popularized left brain-right brain dichotomy). Individuals differ in the ease with which they use such linear processing, as is evident in differences in mathematical ability. Indeed, variations in the role played by abstract, linear cognitive processing, as contrasted with other modes of concrete and/or holistic response, probably explain not only individual learning styles but also the broader gender differences emphasized by Carol Gilligan (1982). Thinking in stereotypes is thus probably not like conventional computer programming at all. Instead, it seems to represent a mode of pattern matching that is concrete rather than abstract and holistic rather than linear. This difference has profound implications not only for models of artificial intelligence but also for our understanding of human nature. Why is it, then, that clarifying Fox's apparent mistake only strengthens his main point?
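The contrast drawn here, between explicit linear analysis and holistic pattern matching, can be made concrete with a toy sketch. The following is only an illustration under invented assumptions (the features, categories, and prototype values are all hypothetical, not drawn from Fox or from any neuroscience model): the first function mimics serial, rule-by-rule computation of the kind Fox's "computer program" analogy suggests, while the second classifies by comparing a whole feature vector at once to stored prototypes, closer in spirit to stereotype-like pattern matching.

```python
# Two toy modes of categorization (illustrative only; all features,
# categories, and numeric values are invented for this sketch).

def serial_rule_based(features):
    """Linear analysis: evaluate explicit rules one step at a time."""
    if features["has_feathers"]:
        if features["can_fly"]:
            return "bird"
        return "flightless bird"
    return "not a bird"

# Holistic matching: no explicit rules, just stored exemplar patterns.
# Each prototype is a whole feature vector of (feathers, flight).
PROTOTYPES = {
    "bird": (1.0, 1.0),
    "flightless bird": (1.0, 0.0),
    "not a bird": (0.0, 0.0),
}

def holistic_prototype_match(vector):
    """Classify by overall similarity to the nearest stored prototype."""
    def distance(proto):
        # Squared Euclidean distance over the whole vector at once.
        return sum((a - b) ** 2 for a, b in zip(vector, proto))
    return min(PROTOTYPES, key=lambda label: distance(PROTOTYPES[label]))

print(serial_rule_based({"has_feathers": True, "can_fly": False}))
# -> flightless bird
print(holistic_prototype_match((0.9, 0.1)))
# -> flightless bird
```

Both functions can agree on clear cases, but they degrade differently: the rule-based version fails outright if a feature is missing or ambiguous, whereas the prototype matcher still returns the globally closest category, which is one way to picture why stereotype-like matching is fast and robust yet qualitatively unlike step-by-step logical analysis.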