Abstract

This study investigated the use of acoustic cues for identifying speech intonation contrasts in adult cochlear implant (CI) recipients and in normal-hearing (NH) individuals listening to spectrally and temporally degraded stimuli. Fundamental frequency, intensity, and duration patterns of a disyllabic word, popcorn, were manipulated orthogonally, yielding 360 resynthesized stimuli. In a two-alternative forced-choice task, CI and NH participants identified whether each stimulus was question-like or statement-like. Each NH listener also identified noise-vocoded versions of the stimuli. Preliminary results from seven CI and four NH listeners indicated that (a) CI listeners weighted fundamental frequency less heavily in their identifications than NH listeners did, (b) unlike NH listeners, who showed little reliance on intensity or duration patterns when identifying unprocessed stimuli, CI users depended systematically on intensity and duration cues for the same set of stimuli, and (c) with noise-vocoded (spectrally and temporally degraded) stimuli, NH listeners' cue weighting for intensity and duration resembled that of CI listeners. Implications for the processing of multidimensional acoustic cues for speech intonation in normal and electrical hearing will be discussed. [Work supported by NIDCD-R01DC04786.]

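The spectral and temporal degradation described in the abstract corresponds to channel (noise) vocoding. The Python sketch below (NumPy/SciPy) illustrates a generic noise vocoder only; the channel count, filter order, logarithmic channel spacing, and frequency range are illustrative assumptions, not the settings used in this study. Each band-pass analysis filter extracts a channel's temporal envelope, which then modulates band-limited noise, preserving gross temporal cues while discarding fine spectral detail.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(signal, fs, n_channels=8, lo=80.0, hi=6000.0):
        """Degrade a speech waveform with an n-channel noise vocoder.

        `signal` is a float array; `fs` (Hz) should be well above 2 * `hi`.
        All parameter defaults are illustrative assumptions.
        """
        # Channel edges spaced logarithmically between lo and hi.
        edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
        carrier = np.random.randn(len(signal))        # broadband noise carrier
        out = np.zeros(len(signal), dtype=float)
        for f1, f2 in zip(edges[:-1], edges[1:]):
            sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)           # analysis band of the speech
            env = np.abs(hilbert(band))               # temporal envelope of that band
            noise_band = sosfiltfilt(sos, carrier)    # noise limited to the same band
            out += env * noise_band                   # envelope-modulated noise
        # Roughly match overall RMS to the input so loudness is preserved.
        out *= np.sqrt(np.mean(signal ** 2) / np.mean(out ** 2))
        return out

A recorded token of "popcorn" could be passed through such a function (e.g., noise_vocode(waveform, 16000)) to produce a spectrally and temporally degraded version; the exact resynthesis and vocoding parameters used in the experiment would need to be taken from the full paper.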