A model of face representation, inspired by known biology of the visual system, is compared to experimental data on the perception of facial similarity. The face representation model uses aggregate primary visual cortex (V1) cell responses topographically linked to a grid covering the face, allowing comparison of shape and texture at corresponding points in two facial images. When a set of relatively similar faces was used as stimuli, this “linked aggregate code” (LAC) predicted human performance in similarity judgment experiments. When faces of different categories were used, natural facial dimensions such as sex and race emerged from the LAC model without training. The dimensional structure of the LAC similarity measure for the mixed-category task displayed some psychologically plausible features, but also highlighted shortcomings of the proposed representation. The results suggest that the LAC-based similarity measure may provide a useful starting point for further modeling studies of face representation in higher visual areas.
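The abstract does not specify the filters or the comparison rule, so the following is only a minimal sketch of a LAC-style similarity measure under stated assumptions: V1-like responses are approximated by Gabor filter magnitudes, aggregated at the nodes of a regular grid overlaid on each face image, and two faces are compared by cosine similarity at corresponding ("linked") grid nodes, averaged over the grid. All function names and parameter values here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve


def gabor_kernel(frequency, theta, sigma=3.0, size=15):
    """Real-part Gabor kernel as a stand-in for a V1 simple-cell filter (assumption)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the filter orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * frequency * xr)
    return envelope * carrier


def lac_features(face, grid_step=8, frequencies=(0.1, 0.2), n_orientations=4):
    """Aggregate filter responses at the nodes of a grid covering the face image.

    Returns an array of shape (n_nodes, n_filters): one response vector per grid node.
    """
    responses = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            # Magnitude of the filtered image approximates an aggregate local response.
            r = np.abs(fftconvolve(face, gabor_kernel(f, theta), mode="same"))
            responses.append(r)
    stack = np.stack(responses, axis=-1)                      # (H, W, n_filters)
    nodes = stack[grid_step // 2::grid_step, grid_step // 2::grid_step]
    return nodes.reshape(-1, stack.shape[-1])                 # (n_nodes, n_filters)


def lac_similarity(face_a, face_b, **kwargs):
    """Compare two same-sized face images at linked grid nodes; average cosine similarity."""
    fa = lac_features(face_a, **kwargs)
    fb = lac_features(face_b, **kwargs)
    num = np.sum(fa * fb, axis=1)
    den = np.linalg.norm(fa, axis=1) * np.linalg.norm(fb, axis=1) + 1e-12
    return float(np.mean(num / den))
```

Because corresponding nodes of the two grids are compared directly, the measure reflects both local texture (filter response pattern at a node) and coarse shape (how responses are distributed across the grid), which is the general idea the abstract attributes to the linked aggregate code.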