Abstract

Bimodal hearing, which combines a cochlear implant (CI) in one ear with a hearing aid in the contralateral ear, provides significant speech recognition benefits relative to a CI alone. The factors that predict bimodal benefit remain poorly understood but may involve access to fundamental frequency (F0) and/or formant information from the non-implanted ear. This study investigated whether neural responses (frequency following responses, FFRs) to simulated bimodal signals can be (1) accurately classified using machine learning and (2) used to predict perceptual bimodal benefit. We hypothesized that FFR classification accuracy would improve with increasing acoustic bandwidth because of greater access to F0 and formant cues. Three vowels (/e/, /i/, and /ʊ/) with identical fundamental frequencies were manipulated to create five bimodal simulations (vocoded speech in the right ear, lowpass-filtered speech in the left ear): Vocoder-only, Vocoder +125 Hz, Vocoder +250 Hz, Vocoder +500 Hz, and Vocoder +750 Hz. Perceptual performance on the BKB-SIN test was also measured using the same five configurations. FFR classification accuracy improved with increasing bimodal acoustic bandwidth. Furthermore, FFR bimodal benefit predicted behavioral bimodal benefit. These results indicate that the FFR may be useful for objectively verifying and tuning bimodal configurations.
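To make the simulation design concrete, the sketch below illustrates how the five stimulus conditions described above could be constructed in Python: a channel-vocoded signal presented to the right ear, paired in the left ear with either silence (Vocoder-only) or a lowpass-filtered copy of the vowel at each cutoff (125, 250, 500, or 750 Hz). This is a minimal illustration only; the vocoder design, channel count, filter order, and sampling rate are assumptions, not the authors' exact processing chain.

```python
# Hypothetical sketch of the five bimodal stimulus conditions (assumed
# processing parameters; not the study's actual signal chain).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 44100  # assumed sampling rate (Hz)

def bandpass_sos(lo, hi, fs=FS, order=4):
    """Butterworth bandpass filter in second-order sections."""
    return butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")

def noise_vocode(x, n_channels=8, f_lo=100.0, f_hi=8000.0, fs=FS):
    """Simple noise-excited channel vocoder (illustrative assumption)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    out = np.zeros_like(x)
    rng = np.random.default_rng(0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = bandpass_sos(lo, hi, fs)
        env = np.abs(hilbert(sosfiltfilt(sos, x)))   # band envelope
        carrier = rng.standard_normal(len(x))        # noise carrier
        out += env * sosfiltfilt(sos, carrier)       # envelope-modulated band
    return out

def lowpass(x, cutoff_hz, fs=FS, order=4):
    """Lowpass filter simulating residual acoustic hearing bandwidth."""
    sos = butter(order, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def bimodal_conditions(vowel):
    """Return {condition name: (right_ear, left_ear)} signal pairs."""
    voc = noise_vocode(vowel)
    conds = {"Vocoder-only": (voc, np.zeros_like(vowel))}
    for fc in (125, 250, 500, 750):
        conds[f"Vocoder +{fc} Hz"] = (voc, lowpass(vowel, fc))
    return conds
```

Pairing the same right-ear vocoded signal with left-ear lowpass copies at progressively higher cutoffs isolates the contribution of added low-frequency acoustic bandwidth, mirroring the bandwidth comparison reported in the abstract.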
