Abstract

We encounter and process information from multiple sensory modalities in our daily lives, and research suggests that learning can be more efficient when contexts are multisensory. In this study, we investigated whether face identity recognition memory is improved under multisensory learning conditions and explored associated changes in pupil dilation during encoding and recognition. In two experiments, participants completed old/new face recognition tasks in which visual face stimuli were presented in the context of sounds. Faces were learnt alongside no sound, low-arousal sounds (Experiment 1), high-arousal non-face-relevant sounds, or high-arousal face-relevant sounds (Experiment 2). We predicted that the presence of sounds during encoding would improve later recognition accuracy; however, the results did not support this prediction, with no effect of sound condition on memory. Pupil dilation, in contrast, did predict later successful recognition, both at encoding and during recognition. While these results do not support the notion that face learning is improved under multisensory conditions relative to unisensory conditions, they do suggest that pupillometry may be a useful tool for further exploring face identity learning and recognition.
