Abstract

Hearing prostheses have built-in algorithms that perform acoustic noise reduction and improve speech intelligibility. However, in a multi-speaker scenario the noise reduction algorithm has to determine which speaker the listener is focusing on, in order to enhance that speaker while suppressing the other interfering sources. Recently, it has been demonstrated that it is possible to detect auditory attention using electroencephalography (EEG). In this paper, we use multi-channel Wiener filters (MWFs) to filter out each speech stream from the speech mixtures captured by the microphones of a binaural hearing aid, while also reducing background noise. From the demixed and denoised speech streams, we extract envelopes for an EEG-based auditory attention detection (AAD) algorithm. The AAD module can then select the output of the MWF corresponding to the attended speaker. We evaluate our algorithm in a two-speaker scenario in the presence of babble noise and compare it to a previously proposed algorithm. Our algorithm is observed to provide speech envelopes that yield better AAD accuracies, and it is more robust to variations in speaker positions and diffuse background noise.
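The selection step described above can be illustrated with a minimal sketch: extract a broadband envelope from each MWF output, then pick the output whose envelope correlates best with the envelope reconstructed from the EEG. The rectify-and-smooth envelope and the correlation-based decision are simplified stand-ins for the paper's actual processing chain; the function names and parameters are hypothetical.

```python
import numpy as np

def extract_envelope(speech, fs, win_dur=0.016):
    """Broadband envelope sketch: rectify the speech signal, then
    smooth it with a short moving-average window (win_dur seconds)."""
    rectified = np.abs(speech)
    win = max(1, int(fs * win_dur))
    return np.convolve(rectified, np.ones(win) / win, mode="same")

def select_attended(eeg_envelope, mwf_outputs, fs):
    """Return the index of the MWF output whose speech envelope has the
    highest Pearson correlation with the EEG-reconstructed envelope.
    eeg_envelope stands in for the output of a (not shown) EEG decoder."""
    corrs = []
    for out in mwf_outputs:
        env = extract_envelope(out, fs)
        n = min(len(env), len(eeg_envelope))
        corrs.append(np.corrcoef(env[:n], eeg_envelope[:n])[0, 1])
    return int(np.argmax(corrs))
```

In a two-speaker scenario, `mwf_outputs` would hold the two demixed and denoised speech streams, and the selected index determines which MWF output the hearing aid presents to the listener.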

Highlights

  • Signal processing algorithms in hearing aids and cochlear implants make it possible to suppress background noise and thereby improve speech intelligibility for the hearing impaired

  • We propose an improved algorithm in which each of multiple multi-channel Wiener filters (MWFs) receives speaker-dependent voice activity information derived from the speaker envelopes extracted by a blind envelope demixing algorithm

  • We use the output signal-to-noise ratios (SNRs) of the N-fold MWFs and the attention detection accuracy as metrics to assess the performance of the proposed algorithm
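The output SNR metric mentioned in the last highlight can be computed in the usual way, as the ratio of filtered target power to residual noise power in decibels. This is a generic sketch of the metric, not the paper's evaluation code; the separation of the MWF output into a target and a residual-noise component is assumed to be available (e.g. by filtering the clean and noise components separately).

```python
import numpy as np

def output_snr_db(target, residual_noise):
    """Output SNR in dB: 10 * log10 of the ratio between the power of
    the filtered target speech and the power of the residual noise."""
    p_target = np.mean(np.asarray(target, dtype=float) ** 2)
    p_noise = np.mean(np.asarray(residual_noise, dtype=float) ** 2)
    return 10.0 * np.log10(p_target / p_noise)
```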

Introduction

Signal processing algorithms in hearing aids and cochlear implants make it possible to suppress background noise and thereby improve speech intelligibility for the hearing impaired. Adaptive beamformers are powerful because they can adapt and optimize their beam pattern to the acoustic scenario [1], [2]. Incorporating a brain-computer interface to infer the auditory attention of the listener opens up an interesting field of research aiming to build smarter hearing prostheses [3]. Various recent studies have demonstrated that it is possible to perform auditory attention detection (AAD) based on neural measurements such as EEG [4]–[7], and that differential tracking of the attended and unattended speech streams, necessary for AAD, is present in hearing-impaired listeners [8]. Supported by discreet EEG recording technology [9]–[11], such AAD algorithms could work hand in hand with noise
