Abstract

Hearing-impaired listeners have a reduced ability to selectively attend to sounds of interest amid distracting sounds in everyday environments, and this ability is not fully restored by modern hearing technology. A better understanding of the brain mechanisms underlying selective attention during speech processing may lead to brain-controlled hearing aids with improved detection and amplification of the attended speech. Prior work has shown that brain responses to speech, measured with magnetoencephalography (MEG) or electroencephalography (EEG), are modulated by selective attention. These responses can be predicted from the speech signal through linear filters called Temporal Response Functions (TRFs). Unfortunately, these sensor-level predictions are often noisy and provide little insight into specific brain source locations. Therefore, a novel method called Neuro-Current Response Functions (NCRFs) was recently introduced to estimate linear filters directly at the brain source level from MEG responses to speech from a single talker. However, MEG is not well suited for wearable and real-time hearing technologies. This work aims to adapt the NCRF method for EEG under more realistic listening conditions. EEG data were recorded from a hearing-impaired listener attending to one of two competing talkers embedded in 16-talker babble noise. Preliminary results indicate that source-localized linear filters can be estimated directly from EEG data in such competing-talker scenarios. Future work will focus on evaluating the current method on a larger dataset and on developing novel methods, which may contribute to next-generation brain-controlled hearing technology.
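
The abstract describes TRFs as linear filters that predict neural responses from the speech signal. As a rough, hedged illustration of that general idea (not the NCRF method itself), the sketch below estimates a single-channel TRF by ridge regression over time-lagged copies of a speech envelope; the function name, parameters, and synthetic data are hypothetical and chosen only for demonstration.

```python
import numpy as np

def estimate_trf(envelope, eeg, n_lags=50, alpha=1.0):
    """Estimate a Temporal Response Function (a linear filter) mapping a
    speech envelope to a single EEG channel, via ridge regression over
    time-lagged copies of the envelope. Illustrative sketch only."""
    n = len(envelope)
    # Design matrix of lagged envelope values (lags 0 .. n_lags-1 samples).
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = envelope[: n - lag]
    # Ridge solution: (X^T X + alpha * I)^-1 X^T y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)

# Synthetic example: a known filter convolved with noise-like "speech".
rng = np.random.default_rng(0)
env = rng.standard_normal(10_000)
true_trf = np.hanning(50)
response = np.convolve(env, true_trf)[:10_000] + 0.5 * rng.standard_normal(10_000)
estimated_trf = estimate_trf(env, response)
```

In practice, TRF and NCRF estimation operate on real speech envelopes and multichannel MEG/EEG recordings and use more sophisticated regularization; this example only shows the basic lagged-regression structure behind the linear-filter idea.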
