Abstract

Speech intelligibility in cochlear implant (CI) users degrades considerably in listening environments with reverberation and noise. Previous research in automatic speech recognition (ASR) has shown that phoneme-based speech enhancement algorithms improve ASR system performance in reverberant environments as compared to a global model. However, phoneme-specific speech processing has not yet been implemented in CIs. In this paper, we propose a causal deep learning framework for classifying phonemes using features extracted at the time-frequency resolution of a CI processor. We trained and tested long short-term memory networks to classify phonemes and manner of articulation in anechoic and reverberant conditions. The results showed that CI-inspired features provide slightly higher performance than traditional ASR features. To the best of our knowledge, this study is the first to provide a classification framework with the potential to categorize phonetic units in real time in a CI.
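
The paper itself does not include code; the following is a minimal illustrative sketch, not the authors' implementation, of the kind of causal (unidirectional) LSTM classifier the abstract describes, operating on frame-level CI-channel-like spectral features. The feature dimensionality (22 channels, mirroring a common CI electrode count), the number of phoneme classes, and the layer sizes are assumptions made for illustration only.

```python
# Minimal sketch (PyTorch) of a causal phoneme classifier on CI-like features.
# Assumptions (not from the paper): 22 spectral channels, 39 phoneme classes,
# 2 LSTM layers with 128 hidden units. A unidirectional LSTM keeps processing
# causal: each frame's prediction depends only on current and past frames.

import torch
import torch.nn as nn

class CausalPhonemeClassifier(nn.Module):
    def __init__(self, n_channels=22, n_classes=39, hidden=128, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=layers, batch_first=True)  # unidirectional -> causal
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, n_channels) frame-level envelope/spectral features
        h, _ = self.lstm(x)      # (batch, time, hidden)
        return self.out(h)       # per-frame phoneme logits: (batch, time, n_classes)

# Example usage: one utterance of 200 frames of 22-channel features.
model = CausalPhonemeClassifier()
feats = torch.randn(1, 200, 22)
logits = model(feats)                          # shape: (1, 200, 39)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 39),
                             torch.randint(0, 39, (200,)))  # dummy frame labels
```

Because the LSTM is unidirectional, the same model could in principle emit a phoneme or manner-of-articulation decision frame by frame as audio arrives, which is the real-time constraint a CI processor imposes.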
