Abstract
Sound processing in the human auditory system is complex and highly non-linear, whereas hearing aids (HAs) still rely on simplified descriptions of auditory processing or hearing loss to restore hearing. Even though standard HA amplification strategies succeed in restoring the audibility of faint sounds, they still fall short of providing targeted treatments for complex sensorineural deficits and adverse listening conditions. These shortcomings of current HA devices demonstrate the need for advanced hearing-loss compensation strategies that can effectively leverage the non-linear character of the auditory system. Here, we propose a differentiable deep-neural-network (DNN) framework that can be used to train DNN-based HA models using the biophysical auditory-processing differences between normal-hearing and hearing-impaired systems. We investigate different loss functions to accurately compensate for impairments that include outer-hair-cell (OHC) loss and cochlear synaptopathy (CS), and evaluate the benefits of our trained DNN-based HA models for speech processing in quiet and in noise. Our results show that auditory-processing enhancement was possible for all considered hearing-loss cases, with OHC loss proving easier to compensate than CS. Several objective metrics were considered to estimate the expected speech intelligibility after processing, and these simulations hold promise for improved speech-in-noise understanding in hearing-impaired listeners who use our DNN-based HA processing. Because our framework can be tuned to the hearing-loss profiles of individual listeners, it opens an era in which truly individualised, DNN-based hearing-restoration strategies can be developed and tested experimentally.
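The training idea summarised above lends itself to a short sketch: a trainable DNN hearing aid is optimised so that a frozen, differentiable hearing-impaired auditory model, fed the processed sound, reproduces the response of a normal-hearing auditory model to the unprocessed sound. The PyTorch code below is a minimal illustration of this closed loop under stated assumptions; AuditoryModel, HAModel, the toy filterbank, and the gain parameter that crudely mimics OHC loss are hypothetical stand-ins, not the biophysical models or the specific loss functions investigated in the paper.

import torch

class AuditoryModel(torch.nn.Module):
    """Toy differentiable cochlea: shared filterbank + compressive
    non-linearity. A gain below 1 crudely mimics OHC loss."""
    def __init__(self, filterbank, gain=1.0):
        super().__init__()
        self.fb = filterbank
        self.gain = gain

    def forward(self, x):
        return torch.tanh(self.gain * self.fb(x))

class HAModel(torch.nn.Module):
    """Trainable DNN hearing aid operating on the raw waveform."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv1d(1, 16, kernel_size=31, padding=15),
            torch.nn.Tanh(),
            torch.nn.Conv1d(16, 1, kernel_size=31, padding=15),
        )

    def forward(self, x):
        return self.net(x)

# One frozen filterbank shared by both auditory models.
fb = torch.nn.Conv1d(1, 32, kernel_size=65, padding=32, bias=False)
for p in fb.parameters():
    p.requires_grad_(False)

nh = AuditoryModel(fb, gain=1.0)   # normal-hearing reference system
hi = AuditoryModel(fb, gain=0.3)   # hearing-impaired system (OHC loss)
ha = HAModel()                     # the model we train
opt = torch.optim.Adam(ha.parameters(), lr=1e-4)

x = torch.randn(8, 1, 4096)        # stand-in batch of speech waveforms
for step in range(200):
    opt.zero_grad()
    target = nh(x)                 # NH response to the unprocessed input
    restored = hi(ha(x))           # HI response to the hearing-aid output
    loss = torch.mean((restored - target) ** 2)
    loss.backward()                # gradients flow through the frozen HI model
    opt.step()

In the full framework, the squared-error term would be replaced by one of the paper's investigated loss functions over simulated auditory representations, and the toy filterbank by a biophysically realistic differentiable model of the normal or impaired periphery.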