Abstract

In bimodal listening, cochlear implant (CI) users combine electric hearing (EH) in one ear with acoustic hearing (AH) in the other ear. In electric-acoustic stimulation (EAS), CI users combine EH and AH in the same ear. In quiet, integration of EH and AH has been shown to be better with EAS, but with greater sensitivity to tonotopic mismatch in EH. The goal of the present study was to evaluate how external noise might affect integration of AH and EH within or across ears. Recognition of monosyllabic words was measured for normal-hearing subjects listening to simulations of unimodal (AH or EH alone), EAS, and bimodal listening in quiet and in speech-shaped steady noise (10 dB or 0 dB signal-to-noise ratio). The input/output frequency range for AH was 0.1-0.6 kHz. EH was simulated using an 8-channel noise vocoder. The output frequency range was 1.2-8.0 kHz, simulating a shallow insertion depth. The input frequency range was either matched (1.2-8.0 kHz) or mismatched (0.6-8.0 kHz) to the output frequency range; the mismatched input range maximized the amount of speech information delivered, while the matched input range entailed some loss of speech information. In quiet, tonotopic mismatch affected EAS and bimodal performance differently. In noise, EAS and bimodal performance were similarly affected by tonotopic mismatch. The data suggest that tonotopic mismatch may affect integration of EH and AH differently in quiet and in noise.
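The EH simulation described above can be sketched as a channel noise vocoder: the input is filtered into analysis bands, each band's envelope is extracted, and the envelopes modulate noise carriers filtered into the output (carrier) bands. The sketch below, a minimal illustration rather than the study's actual processing, lets the analysis range differ from the carrier range to mimic the matched (1.2-8.0 kHz) and mismatched (0.6-8.0 kHz) conditions; the sampling rate, logarithmic band spacing, filter orders, and 50 Hz envelope cutoff are all illustrative assumptions.

```python
# Minimal sketch of an 8-channel noise vocoder with a configurable
# analysis (input) range. Parameters are illustrative assumptions,
# not the study's exact processing.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 22050  # assumed sampling rate (Hz)

def band_edges(f_lo, f_hi, n_bands):
    """Divide [f_lo, f_hi] into n_bands with logarithmic spacing (assumed)."""
    return np.geomspace(f_lo, f_hi, n_bands + 1)

def bandpass(x, lo, hi, fs=FS, order=4):
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def envelope(x, cutoff=50.0, fs=FS):
    """Envelope via rectification and lowpass filtering."""
    sos = butter(4, cutoff, btype="lowpass", fs=fs, output="sos")
    return np.maximum(sosfiltfilt(sos, np.abs(x)), 0.0)

def noise_vocode(x, analysis_range, carrier_range=(1200.0, 8000.0),
                 n_bands=8, fs=FS):
    """Extract envelopes in the analysis bands, then use them to
    modulate noise carriers filtered into the carrier bands.
    A lower analysis edge than carrier edge shifts each band's
    envelope upward in frequency (tonotopic mismatch)."""
    rng = np.random.default_rng(0)
    a_edges = band_edges(*analysis_range, n_bands)
    c_edges = band_edges(*carrier_range, n_bands)
    out = np.zeros_like(x)
    for k in range(n_bands):
        env = envelope(bandpass(x, a_edges[k], a_edges[k + 1], fs))
        carrier = bandpass(rng.standard_normal(len(x)),
                           c_edges[k], c_edges[k + 1], fs)
        out += env * carrier
    return out
```

Under this sketch, `noise_vocode(x, (1200.0, 8000.0))` corresponds to the matched condition and `noise_vocode(x, (600.0, 8000.0))` to the mismatched condition, where each band's envelope is presented at a higher carrier band than its analysis band.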

Highlights

  • Adding residual acoustic hearing (AH) benefitted both bimodal and electric-acoustic stimulation (EAS) listening in noise, whether or not there was a tonotopic mismatch in electric hearing (EH)

  • The present results suggest that minimizing tonotopic mismatch for EH may increase the benefit of adding AH, especially for speech in noise, where both EAS and bimodal performance were highly sensitive to tonotopic mismatch

Introduction

Despite considerable efforts over the last 30 years, advances in cochlear implant (CI) technology and signal processing have yet to show substantial gains in speech performance. Advances such as deeply inserted electrodes and current focusing offer theoretical advantages over previous technology, but none has shown consistent advantages for speech perception [1,2,3]. The poor functional spectral resolution and limited temporal information provided by CIs continue to limit the perception of speech in noise, speech prosody, vocal emotion, tonal language, and music [4,5,6,7,8,9,10]. One of the greatest improvements in CI outcomes has come from combining electric hearing (EH) from the CI with acoustic hearing (AH) in the same ear (electric-acoustic stimulation, or EAS) or in opposite ears (bimodal listening).

