Abstract

In listening environments with room reverberation and background noise, cochlear implant (CI) users experience substantial difficulties in understanding speech. Because everyday environments contain different combinations of reverberation and noise, there is a need for algorithms that can mitigate both effects to improve speech intelligibility. Desmond et al. (2014) developed a machine learning approach to mitigate the adverse effects of late reverberant reflections of speech signals by using a classifier to detect and remove affected segments in CI pulse trains. In this study, we investigated the robustness of the reverberation mitigation algorithm in environments with both reverberation and noise. We conducted sentence recognition tests in normal-hearing listeners using vocoded speech, comparing unmitigated and mitigated reverberant-only or noisy reverberant speech signals across different reverberation times and noise types. Improvements in speech intelligibility were observed in the mitigated reverberant-only conditions. However, results in the mitigated noisy reverberant conditions were mixed: speech intelligibility decreased for noise types whose spectra were similar to that of anechoic speech. Based on these results, future work will focus on a context-dependent approach that activates different mitigation strategies for different acoustic environments. [Research supported by NIH grant R01DC014290-03.]
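The detect-and-remove idea referenced above can be illustrated with a minimal sketch. The code below is not Desmond et al.'s (2014) classifier or the CI processing chain used in the study; the envelope extraction, the simple energy-decay decision rule, and all thresholds are illustrative assumptions standing in for a trained model operating on CI pulse trains. It only shows the general structure: frame a signal into per-channel envelopes, flag frames judged to be dominated by late reverberation, and zero those frames before further processing.

```python
# Conceptual sketch (assumptions, not the authors' implementation):
# classify each time-frequency frame of a CI-style envelope representation
# as reverberation-dominated, then remove (zero) the flagged frames.

import numpy as np


def frame_envelopes(signal, n_channels=8, frame_len=256, hop=128):
    """Crude stand-in for a CI filterbank: per-frame FFT magnitudes
    averaged within n_channels frequency bands."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    env = np.zeros((n_channels, n_frames))
    window = np.hanning(frame_len)
    for t in range(n_frames):
        frame = signal[t * hop : t * hop + frame_len]
        spec = np.abs(np.fft.rfft(frame * window))
        bands = np.array_split(spec, n_channels)
        env[:, t] = [b.mean() for b in bands]
    return env


def classify_reverberant_frames(env, threshold=0.5):
    """Toy rule-based classifier: flag decaying frames whose energy stays
    high relative to the running peak, a rough proxy for late reverberant
    tails. A trained model on envelope features would replace this rule."""
    running_peak = np.maximum.accumulate(env, axis=1)
    ratio = env / np.maximum(running_peak, 1e-12)
    decaying = np.diff(env, prepend=env[:, :1]) < 0
    return (ratio > threshold) & decaying


def mitigate(env, mask):
    """Remove frames flagged as reverberation-dominated."""
    cleaned = env.copy()
    cleaned[mask] = 0.0
    return cleaned


# Example with a synthetic decaying signal plus an artificial reverberant tail.
rng = np.random.default_rng(0)
dry = rng.standard_normal(16000) * np.exp(-np.linspace(0, 8, 16000))
tail = np.convolve(dry, np.exp(-np.linspace(0, 5, 4000)), mode="full")[:16000]
env = frame_envelopes(dry + 0.5 * tail)
mask = classify_reverberant_frames(env)
cleaned = mitigate(env, mask)
print(f"Flagged {mask.mean():.0%} of frames as reverberation-dominated")
```

In this simplified form, the same removal step is applied regardless of the acoustic environment, which mirrors the limitation suggested by the mixed noisy reverberant results: a context-dependent variant would first identify the environment and then select an appropriate mitigation strategy.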
