Abstract
Advances in cochlear implant (CI) technology allow for acoustic and electric hearing to be combined within the same ear (electric-acoustic stimulation, or EAS) and/or across ears (bimodal listening). Integration efficiency (IE; the ratio between observed and predicted performance for acoustic-electric hearing) can be used to estimate how well acoustic and electric hearing are combined. The goal of this study was to evaluate factors that affect IE in EAS and bimodal listening. Vowel recognition was measured in normal-hearing subjects listening to simulations of unimodal, EAS, and bimodal listening. The input/output frequency range for acoustic hearing was 0.1–0.6 kHz. For CI simulations, the output frequency range was 1.2–8.0 kHz to simulate a shallow insertion depth, and the input frequency range was varied to provide increasing amounts of speech information and tonotopic mismatch. Performance was best when acoustic and electric hearing were combined in the same ear. IE was significantly better for EAS than for bimodal listening; IE was sensitive to tonotopic mismatch for EAS, but not for bimodal listening. These simulation results suggest that acoustic and electric hearing may be combined more effectively and efficiently within rather than across ears, and that tonotopic mismatch should be minimized to maximize the benefit of acoustic-electric hearing, especially for EAS.
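For concreteness, the following minimal Python sketch illustrates the IE computation described above. The abstract defines IE only as the ratio of observed to predicted performance; the prediction model used here (independent probability summation of the two unimodal scores) is an illustrative assumption, not necessarily the model used in the study.

    # Sketch of the integration-efficiency (IE) computation described in
    # the abstract: IE = observed / predicted performance. The prediction
    # model below (independent probability summation of the two unimodal
    # scores) is an illustrative assumption, not necessarily the study's.

    def predicted_combined(p_acoustic: float, p_electric: float) -> float:
        """Assumed prediction: the listener responds correctly if either
        modality alone would have, treating the two as independent."""
        return p_acoustic + p_electric - p_acoustic * p_electric

    def integration_efficiency(p_observed: float,
                               p_acoustic: float,
                               p_electric: float) -> float:
        """IE > 1.0: the combined score exceeds the model's prediction;
        IE < 1.0: acoustic and electric cues are combined inefficiently."""
        return p_observed / predicted_combined(p_acoustic, p_electric)

    # Example: 40% correct acoustic-only, 55% electric-only, 80% combined.
    print(f"IE = {integration_efficiency(0.80, 0.40, 0.55):.2f}")  # IE = 1.10

Under this assumed model, the predicted combined score for the example is 0.40 + 0.55 − 0.40 × 0.55 = 0.73, so an observed score of 0.80 yields an IE of about 1.10.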
Highlights
The coarse spectral resolution provided by cochlear implants (CIs) greatly limits performance in challenging listening conditions such as perception of speech in noise, speech prosody, vocal emotion, tonal language, music, etc.[1,2,3,4,5,6,7,8]
The lowest output frequency (1.2 kHz) corresponds to the cochlear place of a 20-mm electrode array insertion according to Greenwood's frequency-place function[58] (see the sketch after these highlights), and is slightly higher than the median upper edge of residual acoustic hearing for hybrid CI patients reported by Karsten et al.[59].
Electric-acoustic stimulation (EAS) performance was significantly better than acoustic hearing (AH) alone for CI input low-cutoff frequencies ≥0.5 kHz; bimodal performance was significantly better than AH only when the CI input low-cutoff frequency was 0.8 kHz.
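As a check on the 1.2-kHz figure cited above, the Greenwood frequency-place function can be evaluated directly. A minimal Python sketch follows; Greenwood's human constants are standard, but the 35-mm cochlear duct length is a textbook assumption rather than a value reported in this paper.

    # Greenwood (1990) frequency-place function for the human cochlea:
    #   F(x) = A * (10**(a * x) - k), x = distance from the apex in mm.
    # A = 165.4, a = 0.06/mm, k = 0.88 are Greenwood's human constants;
    # the 35-mm duct length is a standard assumption, not from this paper.
    A, a, k = 165.4, 0.06, 0.88
    COCHLEA_LENGTH_MM = 35.0

    def greenwood_frequency(distance_from_apex_mm: float) -> float:
        """Characteristic frequency (Hz) at a given distance from the apex."""
        return A * (10 ** (a * distance_from_apex_mm) - k)

    # A 20-mm insertion from the base places the most apical electrode
    # 35 - 20 = 15 mm from the apex:
    f_apical = greenwood_frequency(COCHLEA_LENGTH_MM - 20.0)
    print(f"{f_apical:.0f} Hz")  # ~1168 Hz, i.e. roughly the cited 1.2 kHz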
Summary
The coarse spectral resolution provided by cochlear implants (CIs) greatly limits performance in challenging listening conditions such as perception of speech in noise, speech prosody, vocal emotion, tonal language, music, etc.[1,2,3,4,5,6,7,8]. While stimulation at the correct tonotopic place is necessary for complex pitch perception[53], other frequency components important for speech, such as the vowel first formant (F1, associated with tongue height) and second formant (F2, associated with tongue position within the vocal cavity), may also be sensitive to tonotopic mismatch. This may be especially true when one component is delivered to the correct place (e.g., F1 with acoustic hearing) and another is delivered to a shifted place (e.g., F1 and/or F2 with electric hearing), resulting in interference between F1 cues and/or distortion of the ratio between F1 and F2 frequencies. Normal-hearing (NH) listeners and simulations were used to explicitly control the extent of stimulation within the cochlea and to directly compare perception of combined acoustic and electric hearing within and across ears. Such comparisons cannot be made in real EAS and bimodal CI listeners, as the extent/quality of residual acoustic hearing and the electrode-neural interface (the number and position of intra-cochlear electrodes relative to healthy neurons) are likely to vary across ears and/or patients. We hypothesized that there would be a tradeoff between the amount of speech information in the CI simulation and the degree of tonotopic mismatch.
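CI simulations of the kind referred to above are typically implemented as noise vocoders, in which temporal envelopes extracted from analysis bands of the input signal modulate noise carriers placed in (possibly shifted) output bands. The sketch below illustrates the idea, assuming a noise-vocoder implementation; the channel count, filter orders, envelope cutoff, and sample rate are illustrative choices, not the parameters of this study.

    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 22050  # sample rate (Hz); illustrative choice

    def band_edges(lo: float, hi: float, n: int) -> np.ndarray:
        """n+1 logarithmically spaced band edges between lo and hi (Hz)."""
        return np.geomspace(lo, hi, n + 1)

    def bandpass(x, lo, hi):
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        return sosfilt(sos, x)

    def envelope(x, cutoff=160.0):
        """Temporal envelope: half-wave rectification + low-pass filter."""
        sos = butter(4, cutoff, btype="lowpass", fs=FS, output="sos")
        return sosfilt(sos, np.maximum(x, 0.0))

    def noise_vocoder(speech, in_lo, in_hi, out_lo, out_hi, n_channels=8):
        """Noise-vocoded CI simulation. Envelopes from analysis bands
        spanning [in_lo, in_hi] modulate noise carriers in output bands
        spanning [out_lo, out_hi]; when the two ranges differ, the
        simulation carries a tonotopic mismatch, as in the study."""
        in_edges = band_edges(in_lo, in_hi, n_channels)
        out_edges = band_edges(out_lo, out_hi, n_channels)
        noise = np.random.default_rng(0).standard_normal(len(speech))
        out = np.zeros_like(speech)
        for ch in range(n_channels):
            env = envelope(bandpass(speech, in_edges[ch], in_edges[ch + 1]))
            carrier = bandpass(noise, out_edges[ch], out_edges[ch + 1])
            out += env * carrier
        return out / (np.max(np.abs(out)) + 1e-12)

    # Example: a 0.5-8.0 kHz input range mapped onto the 1.2-8.0 kHz output
    # range (the shallow-insertion simulation), i.e. the 0.5-kHz low-cutoff
    # condition from the highlights above.
    speech = np.random.default_rng(1).standard_normal(FS)  # placeholder signal
    vocoded = noise_vocoder(speech, 500.0, 8000.0, 1200.0, 8000.0)

Shifting the analysis range relative to the fixed output range is what trades speech information against tonotopic mismatch: lowering the input low cutoff passes more speech content but compresses it onto a basally shifted cochlear place.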