Abstract

Aims: We are able to express emotions via suprasegmental features of speech (emotional prosody). It is still unclear whether cochlear implant (CI) users can effectively identify emotional prosody. The present EEG study investigates the ability of CI users to recognize vocally expressed emotions and compares two CI speech coding strategies: ACE (Advanced Combination Encoder) and the newly developed MP3000, also known as PACE (Psychoacoustic Advanced Combination Encoder).

Methods: We investigated two groups of participants: 20 unilateral CI users (age range: 25–55 years) and age-matched normal-hearing controls. We presented fifty German sentences, each spoken with one of three target emotions: happy, angry, or neutral. The semantic content of all sentences was neutral. Emotional prosody recognition was assessed using behavioral responses and EEG in both groups. CI users were tested with the ACE versus the MP3000 speech coding strategy, whereas control participants were tested with the original stimuli versus acoustic simulations of both strategies.

Results and Discussion: Behaviorally, normal-hearing listeners achieved near-perfect performance with the original stimuli and performed significantly worse with the simulations (p = .038). CI users recognized emotions more accurately with the MP3000 strategy than with ACE (p < .05); a comparable trend was observed in normal-hearing listeners for the simulations of the two strategies. The P200 amplitude was analyzed as a marker of emotional prosody differentiation. A significantly enhanced P200 peak after sentence onset was observed in all subject groups, reflecting rapid differentiation of emotional prosody. Furthermore, the P200 amplitude elicited by happy prosody was significantly more positive with the MP3000 (PACE) strategy than with ACE (p = .03).

Conclusions: CI users are able to differentiate gross vocal emotions. MP3000 appears to be a better speech coding strategy than ACE for emotional prosody perception.
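The abstract does not describe how the simulations for normal-hearing listeners were generated, but ACE-style processing is commonly approximated with an n-of-m noise vocoder: the signal is split into m frequency bands, and in each analysis frame only the n bands with the strongest envelopes are resynthesized. The sketch below illustrates that idea only; the channel count, number of maxima, filter range, and frame length are illustrative assumptions, not parameters from the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def nofm_vocoder(x, fs, m=22, n=8, fmin=200.0, fmax=8000.0):
    """Crude n-of-m noise vocoder: per analysis frame, keep only the n
    channels with the largest envelope energy (ACE-like selection)."""
    # m bandpass channels spaced logarithmically between fmin and fmax
    edges = np.geomspace(fmin, fmax, m + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bands.append(sosfiltfilt(sos, x))
    env = np.abs(hilbert(np.asarray(bands), axis=1))  # per-channel envelopes

    frame = int(0.008 * fs)  # ~8 ms analysis frames (illustrative)
    mask = np.zeros_like(env)
    for f in range(env.shape[1] // frame):
        seg = env[:, f * frame:(f + 1) * frame]
        top = np.argsort(seg.mean(axis=1))[-n:]  # n strongest channels
        mask[top, f * frame:(f + 1) * frame] = 1.0

    # resynthesize: band-limited noise carriers modulated by the kept envelopes
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env[i] * mask[i] * carrier
    return out / (np.max(np.abs(out)) + 1e-12)
```

MP3000/PACE differs from ACE mainly in the selection step: rather than simply keeping the n highest-energy channels, it uses a psychoacoustic masking model, so channels estimated to be perceptually masked are discarded first.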
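Likewise, the P200 analysis can be illustrated with a minimal sketch using the open-source MNE-Python library. The file name, condition labels, electrode site (Cz), and the 150–250 ms measurement window are hypothetical; the abstract does not report these details.

```python
import mne

# Hypothetical epochs file and condition names (not from the study).
epochs = mne.read_epochs("ci_prosody-epo.fif")

p200 = {}
for cond in ("happy", "angry", "neutral"):
    # Average trials of one emotion and keep a single fronto-central site.
    evoked = epochs[cond].average().pick("Cz")
    # Mean amplitude in an assumed P200 window after sentence onset.
    window = evoked.copy().crop(tmin=0.150, tmax=0.250).data
    p200[cond] = window.mean() * 1e6  # convert volts to microvolts

for cond, amp in p200.items():
    print(f"{cond}: mean 150-250 ms amplitude at Cz = {amp:.2f} uV")
```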
