Abstract

In the everyday environment, affective information is conveyed by both the face and the voice. Studies have demonstrated that a concurrently presented voice can alter the way that an emotional facial expression is perceived, and vice versa, leading to emotional conflict if the information in the two modalities is mismatched. Additionally, evidence suggests that incongruence of emotional valence activates cerebral networks involved in conflict monitoring and resolution. However, it is currently unclear whether this is due to task difficulty (incongruent stimuli being harder to categorize) or simply to the detection of mismatching information in the two modalities. The aim of the present fMRI study was to examine the neurophysiological correlates of processing incongruent emotional information, independent of task difficulty. Subjects were scanned while judging the emotion of face-voice affective stimuli. Both the face and voice were parametrically morphed between anger and happiness and then paired in all audiovisual combinations, resulting in stimuli each defined by two separate values: the degree of incongruence between the face and voice, and the degree of clarity of the combined face-voice information. Due to the specific morphing procedure utilized, we hypothesized that the clarity value, rather than the incongruence value, would better reflect task difficulty. Behavioral data revealed that participants integrated face and voice affective information, and that the clarity value, as opposed to the incongruence value, correlated with categorization difficulty. Cerebrally, incongruence was associated with activity in the superior temporal region, an effect that emerged after task difficulty had been accounted for. Overall, our results suggest that activation in the superior temporal region in response to incongruent information cannot be explained simply by task difficulty, and may rather be due to detection of mismatching information between the two modalities.
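
To make the two stimulus values concrete, here is a minimal sketch of how such a factorial morphing design could be coded, assuming morph levels running from 0 (anger) to 1 (happiness). The morph steps and the exact formulas for incongruence and clarity below are illustrative assumptions, not the authors' published definitions.

```python
import itertools

# Illustrative morph levels from 0.0 (pure anger) to 1.0 (pure happiness);
# the actual step sizes used in the study may differ.
MORPH_LEVELS = [0.0, 0.25, 0.5, 0.75, 1.0]

def stimulus_values(face, voice):
    """Return (incongruence, clarity) for one face-voice pairing.

    incongruence: absolute mismatch between the two morph levels.
    clarity: distance of the combined (averaged) signal from the
    maximally ambiguous midpoint (0.5); higher = easier to categorize.
    Both formulas are assumptions for illustration.
    """
    incongruence = abs(face - voice)
    clarity = abs((face + voice) / 2.0 - 0.5)
    return incongruence, clarity

# Pair every face morph with every voice morph, as in the factorial design.
for face, voice in itertools.product(MORPH_LEVELS, repeat=2):
    inc, cla = stimulus_values(face, voice)
    print(f"face={face:.2f} voice={voice:.2f} -> "
          f"incongruence={inc:.2f} clarity={cla:.2f}")
```

Under this assumed scheme the two values dissociate, which is why clarity rather than incongruence can track difficulty: for example, the congruent but ambiguous pair (0.5, 0.5) has zero incongruence yet minimal clarity, so low clarity (high difficulty) can occur without any cross-modal mismatch.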

Highlights

  • The recognition and understanding of emotion from the face and voice is a crucial part of social cognition and interpersonal relationships

  • Studies using this approach have emphasized the integrative role of the superior temporal gyrus (STG)/middle temporal gyrus (MTG) as well as the posterior STS, the amygdala (Dolan et al., 2001; Ethofer et al., 2006a,b), and the insula (Ethofer et al., 2006a), and regions presumed to be part of the “visual” or “auditory” systems, such as the fusiform gyrus (Kreifelts et al., 2010) and the anterior STG (Robins et al., 2009)

  • After the variance associated with the clarity values was regressed out, we found a positive effect of incongruence across a wide region of the right STG/STS (Figure 5, Table 1C); a sketch of this regressing-out step follows below
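
The "regressed out" step in the last highlight can be illustrated generically. Below is a minimal NumPy sketch that residualizes a trial-wise incongruence value against clarity, in the spirit of orthogonalizing parametric regressors. All values are fabricated placeholders, and this is not the authors' actual fMRI pipeline (which would operate on GLM regressors convolved with a hemodynamic response).

```python
import numpy as np

# Fabricated per-trial values, for illustration only.
rng = np.random.default_rng(0)
n_trials = 100
clarity = rng.uniform(0.0, 0.5, n_trials)       # assumed clarity values
incongruence = rng.uniform(0.0, 1.0, n_trials)  # assumed incongruence values

# Regress incongruence on clarity (plus an intercept) and keep the
# residuals, so any variance shared with clarity is credited to clarity.
X = np.column_stack([np.ones(n_trials), clarity])
beta, *_ = np.linalg.lstsq(X, incongruence, rcond=None)
incongruence_orth = incongruence - X @ beta

# The residualized regressor is (numerically) uncorrelated with clarity,
# so any BOLD variance it explains cannot be attributed to difficulty.
print(np.corrcoef(incongruence_orth, clarity)[0, 1])  # ~0
```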



Introduction

The recognition and understanding of emotion from the face and voice is a crucial part of social cognition and interpersonal relationships. Regions that respond more strongly to combined face-voice stimulation than to both, or to each, of the unimodal sources alone are assumed to play a part in integrating information from the two modalities. Studies using this approach have emphasized the integrative role of the superior temporal gyrus (STG)/middle temporal gyrus (MTG) as well as the posterior STS (pSTS; Pourtois et al., 2005; Ethofer et al., 2006a; Kreifelts et al., 2007, 2009, 2010; Robins et al., 2009), the amygdala (Dolan et al., 2001; Ethofer et al., 2006a,b), and the insula (Ethofer et al., 2006a), and regions presumed to be part of the “visual” or “auditory” systems, such as the fusiform gyrus (Kreifelts et al., 2010) and the anterior STG (Robins et al., 2009).
