Abstract

The perception of emotions is often suggested to be multimodal in nature, and bimodal presentation of emotional stimuli, as compared to unimodal (auditory or visual) presentation, can lead to superior emotion recognition. In previous studies, contrastive aftereffects in emotion perception caused by perceptual adaptation have been shown for faces and for auditory affective vocalizations, when adaptors were of the same modality. By contrast, crossmodal aftereffects in the perception of emotional vocalizations have not yet been demonstrated. In three experiments we investigated the influence of emotional voice adaptors as well as dynamic facial video adaptors on the perception of emotion-ambiguous voices morphed on an angry-to-happy continuum. Contrastive aftereffects were found for unimodal (voice) adaptation conditions, in that test voices were perceived as happier after adaptation to angry voices, and vice versa. Bimodal (voice + dynamic face) adaptors tended to elicit larger contrastive aftereffects. Importantly, crossmodal (dynamic face) adaptors also elicited substantial aftereffects in male, but not in female, participants. Our results (1) support the idea of contrastive processing of emotions, (2) show, for the first time, crossmodal adaptation effects under certain conditions, consistent with the idea that emotion processing is multimodal in nature, and (3) suggest gender differences in the sensory integration of facial and vocal emotional stimuli.

Highlights

  • The perception of emotional states is crucial for adequate social interaction

  • The prominent main effect of ML, F(4,88) = 162.347, p < .001, εHF = .561, ηp² = .881, validated the general morphing procedure, as the proportion of happy responses increased with increasing morph level (Ms = .322 ± .023, .426 ± .030, .514 ± .028, .612 ± .023, and .694 ± .020, for ML20 to ML80, respectively); see the analysis sketch after this list

  • Summary of results from the overall ANOVAs on the proportion of “happy” responses with the factors adaptor emotion (AEmo, 3), test gender (TG, 2), and morph level (ML, 5), and the between-subjects factor adaptor gender (AG, 2), as well as a summary of results of post-hoc ANOVAs performed to follow up significant interactions in Experiments 1 and 2
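
To make the reported morph-level effect concrete, the following is a minimal, hypothetical sketch of a one-way repeated-measures ANOVA on the proportion of “happy” responses across morph levels, run on simulated data in Python. The participant count, morph-level values, column names, and the use of the pingouin package are illustrative assumptions and do not describe the authors' actual stimuli or analysis pipeline.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(seed=1)

# Hypothetical design: 23 participants x 5 morph levels (both values assumed).
participants = [f"p{i:02d}" for i in range(1, 24)]
morph_levels = [20, 35, 50, 65, 80]

rows = []
for pid in participants:
    for ml in morph_levels:
        # Simulated proportion of "happy" responses, rising with morph level.
        prop_happy = float(np.clip(0.2 + 0.006 * ml + rng.normal(0, 0.05), 0, 1))
        rows.append({"participant": pid, "morph_level": ml, "prop_happy": prop_happy})

df = pd.DataFrame(rows)

# One-way repeated-measures ANOVA over morph level. Note: pingouin applies a
# Greenhouse-Geisser sphericity correction (the highlight above reports a
# Huynh-Feldt epsilon) and returns partial eta squared ("np2").
aov = pg.rm_anova(data=df, dv="prop_happy", within="morph_level",
                  subject="participant", correction=True, detailed=True)
print(aov[["Source", "DF", "F", "p-unc", "np2"]])
```

In the full design summarized in the table caption above, adaptor emotion, test gender, and adaptor gender would enter as additional factors rather than morph level alone.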

Introduction

The perception of emotional states is crucial for adequate social interaction. Emotions are expressed in the face and the voice (e.g., [1]), as well as in gesture (e.g., [2]) and body movement (e.g., [2,3,4]). Although the majority of empirical studies have investigated emotion perception in one modality only, many researchers think that emotions are perceived in a multimodal manner [5]. Evidence supporting this idea includes reports on brain-damaged patients, who showed comparable impairments in processing specific emotions from faces and voices (e.g., [6,7], but see [8]). An impressive source of evidence for the perceptual integration of facial movements and speech is the so-called McGurk effect [9], which shows that simultaneous presentation of an auditory vocalization with non-matching facial speech can alter the perceived utterance (e.g., the presentation of an auditory /baba/ with a face simultaneously articulating /gaga/ typically leads to a “fused” percept of /dada/). Crossmodal processing is much less well investigated for paralinguistic social signals, including person identity and emotional expression (for a recent overview, see [11]).
