Abstract

Hearing aid processing is designed to improve the audibility of sounds of interest, often by targeting external speech signals. During natural conversation, however, the hearing aid user is also a source of speech, which can interact with hearing aid function and lead to suboptimal processing. In this study, we investigated how the presence of own voice affects the deployment of specific features designed to enhance the audibility and intelligibility of conversational partners. We recorded real-time hearing aid feature engagement (directional microphones, noise reduction, and speech enhancement) during simulated conversations in quiet and in background noise. Conversations were simulated using an acoustic manikin (GRAS 45BC-12) capable of producing speech via an internal loudspeaker, positioned at the center of a 24-speaker spatial array. The results demonstrate that the presence of a hearing aid user’s own voice disrupts the intended operation of these adaptive features, and that hearing enhancement devices need to be optimized to account for own voice in dynamic scenarios. Future studies will determine the impact of a hearing aid user’s own voice on a device’s ability to improve speech intelligibility and user satisfaction.
