Abstract

In a remote microphone (RM) system, a talker speaks into a microphone and the signal is transmitted to the hearing aids worn by the hearing-impaired listener. A difficulty with remote microphones, however, is that the signal received at the hearing aid bypasses the head and pinna, so the acoustic cues needed to externalize the sound source are missing. The objective of this paper is to process the RM signal to improve externalization when listening through earphones. The processing is based on a structural binaural model, which uses a cascade of processing modules to simulate the interaural level difference, interaural time difference, pinna reflections, ear-canal resonance, and early room reflections. The externalization results for the structural binaural model are compared to a left-right signal blend, the listener's own anechoic head-related impulse response (HRIR), and the listener's own HRIR with room reverberation. The azimuth is varied from straight ahead to 90° to one side. The results show that the structural binaural model is as effective as the listener's own HRIR plus reverberation in producing an externalized acoustic image, and that there is no significant difference in externalization between hearing-impaired and normal-hearing listeners.
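
To make the module cascade concrete, the sketch below renders a mono RM signal to a binaural pair in the order the abstract describes: interaural time difference, interaural level difference (head shadow), pinna reflections, ear-canal resonance, and early room reflections. It is a minimal illustration under assumed parameter values (sample rate, filter cut-offs, reflection delays and gains), not the authors' implementation; every constant and helper name here is hypothetical.

```python
# Minimal sketch of a structural binaural renderer, assuming the module order
# given in the abstract. All parameter values are illustrative assumptions.
import numpy as np
from scipy.signal import lfilter, iirpeak

FS = 16_000           # sample rate, Hz (assumed)
C = 343.0             # speed of sound, m/s
HEAD_RADIUS = 0.0875  # average adult head radius, m

def delay(x, n):
    """Delay a signal by an integer number of samples."""
    n = int(round(n))
    return np.concatenate([np.zeros(n), x])[: len(x)]

def itd_samples(az_deg):
    """Woodworth spherical-head interaural time difference, in samples."""
    az = np.radians(abs(az_deg))
    return (HEAD_RADIUS / C) * (az + np.sin(az)) * FS

def head_shadow(x, az_deg):
    """Crude ILD: the far (shadowed) ear is progressively low-passed and attenuated."""
    frac = np.sin(np.radians(abs(az_deg)))         # 0 at 0 deg, 1 at 90 deg
    fc = 4000.0                                    # assumed shadow cut-off, Hz
    k = np.exp(-2 * np.pi * fc / FS)
    shadowed = lfilter([1 - k], [1, -k], x) * 0.5  # one-pole low-pass, ~6 dB down
    return (1.0 - frac) * x + frac * shadowed      # blend in the shadow with azimuth

def pinna(x, az_deg):
    """Pinna cues as a few short, azimuth-dependent reflections added to the direct path."""
    frac = np.sin(np.radians(abs(az_deg)))
    out = x.copy()
    for gain, t_us in [(0.5, 60 + 20 * frac), (0.25, 180 + 40 * frac)]:
        out += gain * delay(x, t_us * 1e-6 * FS)
    return out

def ear_canal(x):
    """Ear-canal resonance as a peaking filter near 2.7 kHz (assumed centre frequency)."""
    b, a = iirpeak(2700.0 / (FS / 2), Q=2.0)
    return lfilter(b, a, x)

def early_reflections(x):
    """A handful of sparse early room reflections (delays in ms, gains assumed)."""
    out = x.copy()
    for gain, t_ms in [(0.4, 8.0), (0.3, 13.0), (0.25, 21.0), (0.2, 34.0)]:
        out += gain * delay(x, t_ms * 1e-3 * FS)
    return out

def render(mono, az_deg):
    """Render a mono RM signal to a binaural pair for a source at az_deg (positive = right)."""
    near, far = mono.copy(), mono.copy()
    far = head_shadow(delay(far, itd_samples(az_deg)), az_deg)
    near, far = pinna(near, az_deg), pinna(far, az_deg)
    near, far = ear_canal(near), ear_canal(far)
    near, far = early_reflections(near), early_reflections(far)
    # Positive azimuth puts the source to the right, so the left ear is the far ear.
    return (far, near) if az_deg > 0 else (near, far)
```

A call such as `left, right = render(rm_signal, az_deg=45)` would produce the two earphone channels; in the study the azimuth of interest ranges from 0° (straight ahead) to 90° to one side.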
