Abstract

Pitch-dominant information is reduced in the spectrally impoverished signal transmitted by cochlear implants (CIs), which can make voice emotion difficult to perceive. However, this evidence comes largely from non-tonal languages such as English, in which pitch does not carry lexical meaning. To better understand how hearing-impaired (HI) CI users who speak a tone language process emotional prosody, an experiment was conducted with healthy, normal-hearing (NH) Mandarin-speaking adults listening to synthetic stimuli designed to resemble CI input. Listeners heard short sentences, produced by professional actors, from a read-speech database. The stimuli were selected to express four emotions (“angry,” “happy,” “sad,” and “neutral”) in four conditions that varied the Mandarin lexical tones of the sentences. Listeners heard natural speech and three noise-vocoded speech conditions (4, 8, and 16 spectral channels) and made a four-alternative, forced-choice decision about the basic emotion underlying each sentence. Preliminary results indicate that emotional prosody was recognized more accurately in natural speech than in vocoded speech, and more accurately with higher-channel stimuli than with lower-channel stimuli. The findings also suggest that NH Mandarin-speaking listeners show lower overall vocal emotional prosody accuracy than reported in previous studies of non-tonal languages (e.g., English).
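The noise-vocoding manipulation described above (splitting speech into spectral channels, discarding fine structure, and keeping only each channel's amplitude envelope) is a standard way to simulate CI input. Below is a minimal illustrative sketch in Python; the filter orders, frequency range, and envelope cutoff are assumptions for illustration, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels, f_lo=80.0, f_hi=6000.0, env_cutoff=160.0):
    """Noise-vocode a speech signal to approximate CI input.

    Splits the signal into log-spaced frequency bands, extracts each
    band's amplitude envelope (rectification + lowpass), and uses the
    envelope to modulate band-limited noise. All parameter values here
    are illustrative assumptions, not the study's settings.
    """
    # Logarithmically spaced band edges across the analysis range
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))  # broadband noise carrier
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)       # analysis band
        env = sosfiltfilt(env_sos, np.abs(band))   # amplitude envelope
        carrier = sosfiltfilt(band_sos, noise)     # noise in the same band
        out += env * carrier                       # envelope-modulated noise
    return out
```

With fewer channels (e.g., 4) the spectral detail that carries pitch is heavily degraded, which is why the abstract predicts, and finds, poorer emotion recognition at lower channel counts.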
