This paper presents an extension and refinement of the author's theory of human visual information processing, which is then applied to the problem of human facial recognition. Several fundamental processes are implicated: encoding of visual images into neural patterns, detection of simple facial features, size standardization, reduction of the dimensionality of the neural patterns, and finally correlation of the resulting sequence of patterns with all visual patterns already stored in memory. In the theory presented here, this entire process is driven automatically by the storage system in what amounts to a hypothesis-verification paradigm. Neural networks for carrying out these processes are presented, and syndromes resulting from damage to the proposed system are analyzed. A correspondence between system components and brain anatomy is suggested, with particular emphasis on the role of the primary visual cortex in this process. The correspondence is supported by structural and electrophysiological properties of the primary visual cortex and other related structures. The logical (computational) role suggested for the primary visual cortex has several components: size standardization, size reduction, and object extraction, where object extraction is the isolation of regions in the visual field having the same color, texture, or spatial extent. The result of processing by the primary visual cortex, it is suggested, is a neural encoding of the visual pattern at a size suitable for storage. It is shown in detail how the topology of the mapping from retina to cortex, the connections between the retina, lateral geniculate bodies, and primary visual cortex, and the local structure of the cortex itself may combine to encode the visual patterns. Aspects of this theory are illustrated graphically with human faces as the primary stimulus.
However, the theory is not limited to facial recognition but pertains to Gestalt recognition of any class of familiar objects or scenes.
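The processing sequence summarized above (size standardization, dimensionality reduction, and correlation against stored patterns) can be sketched computationally. The following is a minimal illustration only, not the paper's neural mechanisms: the function names (`standardize`, `reduce_dim`, `correlate`) are hypothetical, resizing is done by simple index sampling, dimensionality reduction by block averaging, and the memory correlation stage is stood in for by normalized template matching over a dictionary of stored patterns.

```python
# Illustrative sketch (assumed names, not the paper's mechanisms) of the
# pipeline: size standardization -> dimensionality reduction -> correlation
# of the reduced pattern against all patterns stored in memory.
import numpy as np

def standardize(img, size=16):
    """Standardize a 2-D pattern to a fixed size by nearest-index sampling."""
    rows = np.linspace(0, img.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, size).astype(int)
    return img[np.ix_(rows, cols)]

def reduce_dim(img, factor=2):
    """Reduce dimensionality by averaging non-overlapping factor x factor blocks."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))

def correlate(pattern, memory):
    """Return the stored label whose pattern best correlates with the input."""
    v = pattern.ravel()
    v = (v - v.mean()) / (v.std() + 1e-12)
    best_label, best_r = None, -np.inf
    for label, stored in memory.items():
        s = stored.ravel()
        s = (s - s.mean()) / (s.std() + 1e-12)
        r = float(v @ s) / v.size  # Pearson-style correlation in [-1, 1]
        if r > best_r:
            best_label, best_r = label, r
    return best_label, best_r
```

In this toy version, a query image of any size is standardized, reduced, and matched against every stored pattern; the best-correlating stored pattern plays the role of the verified hypothesis.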