Abstract

State-of-the-art hearing prostheses are equipped with acoustic noise reduction algorithms to improve speech intelligibility. However, cocktail party scenarios with multiple speakers pose a major challenge, since it is difficult for the algorithm to determine which speaker it should enhance. To address this problem, electroencephalography (EEG) signals can be used to perform auditory attention detection (AAD), i.e., to detect which speaker the listener is attending to. Taking a step further towards the realization of a neuro-steered hearing prosthesis, we address AAD-assisted noise suppression in a competing-speakers scenario in the presence of babble noise. We use an EEG-informed AAD module in combination with a blind source separation algorithm to extract the per-speaker envelopes from the microphone recordings, as well as a multi-channel Wiener filter to extract the denoised speech signal(s). With this new algorithm pipeline, we obtain better AAD accuracies and better robustness to variations in speaker positions and signal-to-noise ratios (SNRs) than previously reported. Furthermore, the algorithm allows a swifter switch to the other speaker’s stream when the listener’s attention switches.
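The EEG-informed AAD module described above is commonly built on linear stimulus reconstruction: a decoder maps the EEG to an estimate of the attended speech envelope, and the candidate speaker whose envelope correlates best with that estimate is declared the attended one. The sketch below illustrates this idea in simplified form (it uses a single-sample spatial decoder and omits the time-lagged EEG features and decoder training used in practice; the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def detect_attended_speaker(eeg, decoder, envelopes):
    """Correlation-based AAD sketch.

    eeg       : (samples, channels) EEG segment
    decoder   : (channels,) linear weights reconstructing the
                attended envelope (assumed pre-trained)
    envelopes : list of per-speaker envelopes, each (samples,),
                e.g. as extracted by a source separation stage
    Returns the index of the detected attended speaker and the
    per-speaker correlation scores.
    """
    # Reconstruct the attended envelope from the EEG.
    reconstruction = eeg @ decoder
    # Correlate the reconstruction with each candidate envelope.
    corrs = [np.corrcoef(reconstruction, env)[0, 1] for env in envelopes]
    return int(np.argmax(corrs)), corrs

# Synthetic usage example: channel 0 carries speaker A's envelope.
rng = np.random.default_rng(0)
env_a = rng.random(1000)
env_b = rng.random(1000)
eeg = np.column_stack([env_a + 0.1 * rng.standard_normal(1000),
                       rng.random(1000)])
decoder = np.array([1.0, 0.0])
idx, corrs = detect_attended_speaker(eeg, decoder, [env_a, env_b])
```

In a full pipeline, the AAD decision computed this way would steer which separated speech stream the multi-channel Wiener filter passes on as the enhanced output.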
