Abstract
Humans have the ability to focus on one sound source in a noisy scene, which is critical for everyday communication. Auditory attention detection (AAD) seeks to detect this selective attention from one's brain signals. For AAD to be useful in brain–computer interface applications, new approaches with low computational cost, high classification performance, and low latency need to be developed. In this study, we propose a novel neural-inspired architecture that mimics the neural computation and coding strategies of the brain for electroencephalography (EEG)-based AAD. We validated our model through data visualization and conducted experiments on two publicly available databases. On both the KUL and DTU databases, it outperforms both linear and convolutional neural network (CNN) models in detection accuracy, with consistent improvements across decision windows from 1 s to 5 s. Although the accuracy of the proposed neural-inspired model is inferior to that of the state-of-the-art spatio-spectral feature (SSF)-CNN model, its computational cost is less than 1% of SSF-CNN's. Moreover, the neural-inspired decoder is more hardware-friendly and energy-efficient owing to its biological computing scheme. Overall, the proposed neural-inspired architecture realizes fast, accurate, and low-energy AAD, a significant step towards practical neuro-steered hearing aids.