Abstract

Hearing aids and other listening devices perform poorly in noisy, reverberant venues such as restaurants and conference centers with many active sound sources. Microphone arrays can use array processing techniques such as beamforming to isolate talkers in a specific region of the room while attenuating undesired sound sources. However, beamforming often removes spatial cues and is typically restricted to isolating a single talker at a time. Previous work has shown that remote microphones worn by talkers, with their signals adapted at the earpiece, can improve the intelligibility of group conversations. Due to the rise of hybrid meetings and classrooms, many spaces are equipped with high-throughput, low-latency devices, including large microphone arrays. In this work, we present a system that aggregates information collected by microphone arrays distributed throughout a room to enhance the intelligibility of talkers in a group conversation. The beamformed signal from the microphone arrays is adapted to match the magnitude and phase of the earpiece microphones. The filters are continuously updated to track the motion of both listeners and talkers.
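The core adaptation step described above can be sketched with a normalized least-mean-squares (NLMS) filter that shapes the beamformed signal toward the earpiece microphone signal, so spatial cues at the ear are preserved. This is a minimal illustrative sketch, not the authors' implementation: the function name `nlms_match`, the tap count, and the step size are all assumptions, and a real system would run such updates continuously per block to track talker and listener motion.

```python
import numpy as np

def nlms_match(reference, target, num_taps=16, mu=0.5, eps=1e-8):
    """Adapt an FIR filter so the beamformed 'reference' signal matches
    the earpiece microphone 'target' signal in magnitude and phase.
    Hypothetical sketch of the adaptation idea, not the paper's system."""
    w = np.zeros(num_taps)              # filter taps, updated every sample
    out = np.zeros(len(target))
    for n in range(num_taps, len(target)):
        x = reference[n - num_taps:n][::-1]   # most recent sample first
        out[n] = w @ x                        # a priori filter output
        e = target[n] - out[n]                # instantaneous mismatch
        w += mu * e * x / (x @ x + eps)       # normalized LMS update
    return out, w

# Toy check: the earpiece signal is a delayed, scaled copy of the
# beamformed signal; the filter should learn that gain and delay.
rng = np.random.default_rng(0)
ref = rng.standard_normal(4000)
tgt = 0.7 * np.concatenate([np.zeros(3), ref[:-3]])   # gain 0.7, 3-sample delay
out, w = nlms_match(ref, tgt)
residual = np.mean((tgt[2000:] - out[2000:]) ** 2)    # small after convergence
```

In practice the same update would run on short blocks of the live beamformer output, which is what allows the filters to follow motion of both listeners and talkers.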
