Abstract

People recognize faces of their own race more accurately than faces of other races, a phenomenon known as the “Other-Race Effect” (ORE). Previous studies show that training with multiple variable images improves face recognition. Building on multi-image training, we take a novel approach to improving own- and other-race face recognition by testing the role of learning context in accuracy. Learning context was either contiguous, with multiple images of each identity seen in sequence, or distributed, with multiple images of an identity randomly interspersed among images of different identities. In two experiments, East Asian and Caucasian participants learned own- and other-race faces in either a contiguous or a distributed order. In Experiment 1, participants learned each identity from four highly variable face images. In Experiment 2, identities were learned from one image repeated four times. In both experiments, we found a robust other-race effect. The effect of learning context, however, depended on the variability of the learned images. Distributed presentation yielded better recognition when people learned from a single repeated image (Exp. 2), but not when they learned from multiple variable images (Exp. 1). Overall, performance was better with multiple-image training than with repeated single-image training. We conclude that multiple-image training and distributed learning can both improve recognition accuracy, but via distinct processes. The former broadens perceptual tolerance for image variation within a face when diverse images are available for learning. The latter strengthens the representation of differences among similar faces when only a single learning image is available.
