Abstract

The healthy auditory system can attend to weak sounds within complex acoustic scenes, a skill that degrades with aging and hearing loss. Recent technology such as microphone array processing should alleviate such impairment, but its uptake is limited by the lack of means to steer the processing towards one source among many. Within our auditory brain, efferent pathways put peripheral processing stages under the control of central stages, and ideally we would like such cognitive control to extend to the external device. Recent progress in the field of Brain–Computer Interfaces (BCI), together with some promising attempts at joint decoding of audio streams and ECoG, EEG, or MEG signals, suggests that such control might be possible. Is it? What scientific and technological hurdles need to be overcome to produce a "Cognitively Controlled Hearing Aid"? I will speak more specifically about our efforts to determine the reliability of EEG attention decoding in realistic acoustic scenes.
