Abstract

With advances in wireless technology and sensor miniaturization, more and more non-audio sensors are becoming available to, and are being integrated into, hearing instruments. These sensors not only help improve speech understanding and sound quality and enhance hearing usability, but also expand the hearing instruments' capabilities into health and wellness monitoring. However, the introduction of these sensors also presents a new set of challenges to researchers and engineers. Compared with traditional audio sensors for hearing instruments, these new sensor inputs can come from different modalities and often have different scales and sampling frequencies. In some cases, they are not linear or synchronized with each other. In this presentation, we will review these challenges in detail in the context of hearing instrument applications. Furthermore, we will demonstrate how multimodal signal processing and machine learning can be used to overcome these challenges and bring a greater degree of satisfaction to end users. Finally, future directions in multimodal signal processing and machine learning research for hearing instruments will be discussed.
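To make the scale and sampling-rate mismatch concrete, the sketch below aligns a high-rate audio stream with a low-rate motion-sensor stream on a shared frame grid and normalizes both modalities. The sensor rates (16 kHz audio, 100 Hz three-axis accelerometer), 20 ms frames, and the feature choices are illustrative assumptions, not the pipeline described in the presentation.

```python
import numpy as np

def align_streams(audio, audio_fs, imu, imu_fs, frame_ms=20.0):
    """Summarize an audio stream and a lower-rate IMU stream on a shared
    frame grid so downstream models see synchronized, like-scaled features."""
    frame_len = frame_ms / 1000.0
    duration = min(len(audio) / audio_fs, len(imu) / imu_fs)
    n_frames = int(duration / frame_len)
    frame_times = (np.arange(n_frames) + 0.5) * frame_len  # frame centers (s)

    # Audio: short-time RMS energy per frame (one coarse audio feature).
    samples_per_frame = int(audio_fs * frame_len)
    audio = audio[: n_frames * samples_per_frame]
    audio_rms = np.sqrt(
        np.mean(audio.reshape(n_frames, samples_per_frame) ** 2, axis=1)
    )

    # IMU: linearly interpolate each axis onto the frame-center timestamps,
    # compensating for the much lower (and unsynchronized) sampling rate.
    imu_times = np.arange(len(imu)) / imu_fs
    imu_aligned = np.stack(
        [np.interp(frame_times, imu_times, imu[:, ax]) for ax in range(imu.shape[1])],
        axis=1,
    )

    # Z-score normalize both modalities so their different physical scales
    # do not dominate a joint feature vector.
    feats = np.column_stack([audio_rms, imu_aligned])
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
    return frame_times, feats

# Synthetic example: 2 s of 16 kHz audio and 100 Hz 3-axis accelerometer data.
rng = np.random.default_rng(0)
audio = rng.standard_normal(2 * 16000)
imu = rng.standard_normal((2 * 100, 3))
times, feats = align_streams(audio, 16000, imu, 100)
print(times.shape, feats.shape)  # (100,) and (100, 4) with 20 ms frames
```

Frame-level alignment of this kind is one simple way to fuse modalities with mismatched rates; model-based fusion methods discussed in the presentation may handle the problem differently.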
