Abstract

Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. The CI groups differed in their onset of deafness and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD), but not early deaf (ED) and late deaf (LD), CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced greater interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces, and the CD CI users showed an analogous trend. In the Face task, recognition efficiency did not differ between CI users and controls. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody, however, they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.

Highlights

  • The emotional signal is crossmodal in nature, as emotions are conveyed, for example, by affective prosody and facial expressions [1,2]. In particular, redundant/congruent emotional information coming from faces and voices has been found to facilitate emotion recognition, while non-redundant/incongruent emotional information (e.g., a “happy” face and a “sad” voice) impairs judgments about the emotion expressed in one modality [1,3,4], even when response biases were controlled [5].

  • Research in individuals with congenital dense cataract, who recovered sight at different stages in early development, has demonstrated the crucial role of visual experience during the first months of life for the functional development of sensory processing [14,34]. In light of this evidence, it might be assumed that congenitally deaf (CD) cochlear implant (CI) users recover less than CI users with early or late deafness onset. These open questions were addressed in the present study: We investigated the performance of adolescent and adult CD, early deaf (ED), and late deaf (LD) CI users, age- and gender-matched with three groups of typically hearing control participants, in an emotion recognition task with auditory, visual, and audio-visual emotionally congruent as well as emotionally incongruent nonsense speech stimuli.

  • CD CI users and controls for CD CI users: The analysis of variance (ANOVA) revealed a main effect of Group (F(1, 12) = 24.21, p < .001), indicating that the CD CI users performed overall less efficiently than their matched controls (CD CI users: mean = 5227.49 ms, SD = 3276.44 ms; controls: mean = 2223.52 ms, SD = 555.22 ms).
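The group difference reported above reflects a standard one-way, between-subjects ANOVA on an efficiency measure. As a purely illustrative sketch (not the authors' analysis code), the following Python snippet shows how such a comparison could be run on hypothetical inverse efficiency scores (response time divided by proportion correct, a common way to combine speed and accuracy); the simulated data, group sizes, and the choice of efficiency measure are assumptions made for illustration only.

    # Illustrative only: hypothetical data, not the study's dataset or analysis code.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical per-participant values for 7 CD CI users and 7 matched controls
    rt_ci = rng.normal(3000, 800, 7)      # mean response times in ms
    acc_ci = rng.uniform(0.55, 0.75, 7)   # proportion of correct responses
    rt_ctrl = rng.normal(1800, 300, 7)
    acc_ctrl = rng.uniform(0.80, 0.95, 7)

    # Inverse efficiency score: higher values indicate less efficient performance
    ies_ci = rt_ci / acc_ci
    ies_ctrl = rt_ctrl / acc_ctrl

    # One-way ANOVA with Group (CI users vs. controls) as between-subjects factor
    f_stat, p_value = stats.f_oneway(ies_ci, ies_ctrl)
    df_within = len(ies_ci) + len(ies_ctrl) - 2
    print(f"F(1, {df_within}) = {f_stat:.2f}, p = {p_value:.3f}")

With two groups of seven participants each, the error degrees of freedom equal 12, matching the F(1, 12) statistic reported above; note that with only two groups this ANOVA is equivalent to an independent-samples t test (F = t²).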



Introduction

The emotional signal is crossmodal in nature, as emotions are conveyed, for example, by affective prosody and facial expressions [1,2]. In particular, redundant/congruent emotional information coming from faces and voices has been found to facilitate emotion recognition, while non-redundant/incongruent emotional information (e.g., a “happy” face and a “sad” voice) impairs judgments about the emotion expressed in one modality [1,3,4], even when response biases were controlled [5]. The finding that individuals cannot inhibit the processing of emotional input from a task-irrelevant modality has been taken as evidence for automatic interactions of crossmodal emotional signals (e.g., [4,5,6]). In support of this notion, developmental studies have shown that by the age of 5–7 months, infants are able to detect common emotions across sensory modalities when presented with unfamiliar face-voice pairings [7,8,9]. This result is further corroborated by electrophysiological data collected in 7-month-olds: the event-related potentials (ERPs) measured after presenting face-voice pairings differed for emotionally congruent and incongruent crossmodal stimuli [10]. These results suggest that multisensory interactions of emotional signals emerge early in development.

Methods
Results
Discussion
Conclusion
