Abstract

Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior toward facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally inflected pseudo-utterance ("Someone migged the pazing") uttered in a tone that was congruent or incongruent with one of the faces. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows (0–1250 ms, 1250–2500 ms, 2500–5000 ms) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
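
The temporal-window analysis described above can be made concrete with a brief sketch. The following is a minimal, illustrative example, not the authors' analysis code: it assumes a hypothetical per-fixation table with made-up column names (trial, onset_ms, duration_ms, congruent) and shows how looks could be binned into the three windows and summarized by prosody–face congruency.

    import pandas as pd

    # Hypothetical per-fixation records (toy data, for illustration only):
    # one row per fixation on a face in the array, with onset time relative
    # to array onset and a flag marking whether the fixated face matched
    # the emotion of the spoken prosody.
    fixations = pd.DataFrame({
        "trial":       [1, 1, 1, 2, 2],
        "onset_ms":    [300, 1400, 2700, 200, 2600],
        "duration_ms": [250, 400, 600, 300, 500],
        "congruent":   [True, False, True, True, False],
    })

    # Assign each fixation to one of the three analysis windows from the study.
    edges = [0, 1250, 2500, 5000]
    labels = ["0-1250 ms", "1250-2500 ms", "2500-5000 ms"]
    fixations["window"] = pd.cut(fixations["onset_ms"], bins=edges,
                                 labels=labels, right=False)

    # Within each window, compare how often and how long faces that are
    # prosody-congruent vs. incongruent were looked at (congruency effect).
    summary = (fixations
               .groupby(["window", "congruent"], observed=True)["duration_ms"]
               .agg(n_looks="count", total_look_ms="sum"))
    print(summary)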

Highlights

  • From an information-processing standpoint, engaging in a "routine" conversation is rather complex; to understand a speaker's intentions, listeners must carefully attend to and decipher cues encountered in different sensory modalities and in several communication channels at once.

  • Behavioral performance on the immediate recall task: the percentage of faces correctly recalled from the preceding array was high overall (M = 90.3% ± 9.2).

  • Eye gaze measures: the different eye-tracking measures are reported in Tables 2 and 3 for each emotional facial expression when accompanied by each type of emotional prosody or without prosody.


Introduction

From an information-processing standpoint, engaging in a "routine" conversation is rather complex; to understand a speaker's intentions, listeners must carefully attend to and decipher cues encountered in different sensory modalities (vision, audition) and in several communication channels at once. In terms of the channels involved, listeners analyze the linguistic content of speech while interpreting the relational significance of vocal inflections in speech (i.e., speech prosody) and other extralinguistic cues such as facial expressions and body movements. Given these different sources of social information that must be compared and integrated in some manner during interpersonal events, it is not surprising that cues presented in one modality/channel typically interact with cues presented in another modality/channel [1,2,3]. The goal of this study was to test the idea that meanings conveyed by emotional prosody systematically influence how listeners visually attend to facial expressions, as inferred from on-line measures of their eye fixation patterns using eye-tracking methodology.

