Abstract

Background

Emotionally salient information in spoken language can be conveyed by variations in speech melody (prosody) or by emotional semantics. Emotional prosody is essential for conveying feelings through speech. In sensorineural hearing loss, impaired speech perception can be improved by cochlear implants (CIs). The aim of this study was to investigate the performance of normal-hearing (NH) participants on the perception of emotional prosody with vocoded stimuli. Semantically neutral sentences with emotional (happy, angry, and neutral) prosody were used. The sentences were manipulated to simulate two CI speech-coding strategies: the Advanced Combination Encoder (ACE) and the newly developed Psychoacoustic Advanced Combination Encoder (PACE). Twenty NH adults were asked to recognize emotional prosody from ACE and PACE simulations. Performance was assessed using behavioral tests and event-related potentials (ERPs).

Results

Behavioral data revealed superior performance with original stimuli compared to the simulations. For the simulations, recognition was better for happy and angry prosody than for neutral prosody. Irrespective of whether stimuli were simulated or unsimulated, a significantly larger P200 event-related potential was observed after sentence onset for happy prosody than for the other two emotions. Furthermore, P200 amplitude was significantly more positive for the PACE strategy than for the ACE strategy.

Conclusions

The results suggest the P200 peak as an indicator of active differentiation and recognition of emotional prosody. The larger P200 amplitude for happy prosody indicates the importance of fundamental frequency (F0) cues in prosody processing. The advantage of PACE over ACE highlights a privileged role of the psychoacoustic masking model in improving prosody perception. Taken together, the study emphasizes the importance of vocoded simulations for better understanding the prosodic cues that CI users may be utilizing.
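To make the ERP measure concrete, the sketch below shows one conventional way to quantify a P200 peak: average the single-trial epochs for a condition and take the most positive deflection in a post-onset window (roughly 150-250 ms). The array shapes, electrode choice, and window bounds are illustrative assumptions, not the authors' exact analysis pipeline.

    import numpy as np

    def p200_peak(epochs, times, tmin=0.15, tmax=0.25):
        """Estimate P200 peak amplitude and latency from epoched EEG.

        epochs : ndarray, shape (n_trials, n_times)
            Single-trial voltages from one electrode (e.g. a fronto-central
            site such as Cz), time-locked to sentence onset.
        times : ndarray, shape (n_times,)
            Time axis in seconds relative to stimulus onset.
        tmin, tmax : float
            Search window for the P200 (assumed here to be 150-250 ms).
        """
        erp = epochs.mean(axis=0)                 # average over trials -> ERP
        window = (times >= tmin) & (times <= tmax)
        idx = np.argmax(erp[window])              # most positive deflection
        return erp[window][idx], times[window][idx]

    # Hypothetical usage: compare conditions at one electrode
    # amp_happy, lat_happy = p200_peak(epochs_happy_cz, times)
    # amp_neutral, lat_neutral = p200_peak(epochs_neutral_cz, times)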

Highlights

  • Salient information in spoken language can be provided by variations in speech melody or by emotional semantics

  • This study aimed to investigate an early differentiation of vocal emotions in semantically neutral expressions

  • Using behavioral tasks and event-related potentials (ERPs) to investigate recognition of neutral, angry, and happy prosody, we demonstrated that normal-hearing subjects performed significantly better with unsimulated than with cochlear implant (CI)-simulated stimuli


Introduction

Salient information in spoken language can be provided by variations in speech melody (prosody) or by emotional semantics. In sensorineural hearing loss, impaired speech perception can be improved by cochlear implants (CIs), which enable otherwise deaf individuals to achieve levels of speech perception that would be unattainable with conventional hearing aids [5,6]. In a CI, speech signals are encoded into electrical pulses that stimulate the auditory nerve; the algorithms used for this encoding are known as speech-coding strategies. An important source of variability in the hearing performance of CI users may lie in the speech-coding strategy used [9]. Simulations that mimic an acoustic signal in a manner consistent with the output of a CI have proven helpful for understanding the mechanisms of electric hearing [10], as they provide insight into the relative efficacy of different processing algorithms. The aim of this study was to investigate the performance of normal-hearing (NH) participants on the perception of emotional prosody with such vocoded stimuli.
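As an illustration of how such a simulation works in principle, the sketch below implements a minimal noise-band vocoder: the signal is split into a small number of frequency bands, the temporal envelope of each band is extracted, and each envelope modulates band-limited noise before the bands are summed. This is a generic vocoder sketch, not the ACE or PACE processing used in the study; the number of channels, band edges, envelope cutoff, and the assumption of a sampling rate of at least 16 kHz are illustrative choices only.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def noise_vocode(signal, fs, n_channels=8, env_cutoff=50.0):
        """Minimal noise-band vocoder sketch (not the ACE/PACE algorithms)."""
        # Logarithmically spaced band edges between 100 Hz and 8 kHz
        # (illustrative; requires fs of at least 16 kHz).
        edges = np.logspace(np.log10(100.0), np.log10(8000.0), n_channels + 1)
        b_env, a_env = butter(2, env_cutoff / (fs / 2), btype='low')
        rng = np.random.default_rng(0)
        out = np.zeros(len(signal), dtype=float)
        for lo, hi in zip(edges[:-1], edges[1:]):
            # Band-pass filter the speech into one analysis channel.
            b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype='band')
            band = filtfilt(b, a, signal)
            # Extract the temporal envelope (Hilbert magnitude, low-pass smoothed).
            env = filtfilt(b_env, a_env, np.abs(hilbert(band)))
            # Modulate band-limited noise with the envelope and add to the output.
            noise = filtfilt(b, a, rng.standard_normal(len(signal)))
            out += np.clip(env, 0.0, None) * noise
        # Roughly match the overall level of the original signal.
        out *= np.sqrt(np.mean(signal**2) / (np.mean(out**2) + 1e-12))
        return out

Because the vocoded output preserves the slow amplitude envelopes but discards fine spectral detail, listening to such simulations gives normal-hearing listeners an approximation of the cues available through electric hearing.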
