Abstract

Recognition of visual stimuli from EEG (electroencephalogram) signals has become an important topic in Brain–Computer Interface (BCI) research. Although the underlying spatial features of EEG can effectively represent visual stimulus information, exploiting both the local and the global information in EEG signals to achieve better decoding performance remains highly challenging. In this paper we therefore propose a deep learning architecture, the Linear-Attention-combined Convolutional Neural Network (LACNN), for EEG-based classification of visual stimuli. The architecture combines Convolutional Neural Network (CNN) and Linear Attention modules, extracting local and global EEG features for decoding while keeping computational complexity and parameter count low. We conducted extensive experiments on a public EEG dataset from the Stanford Digital Repository. LACNN achieves average decoding accuracies of 54.13% and 29.83% on the 6-category and 72-exemplar classification tasks respectively, outperforming state-of-the-art methods, which indicates that our approach can effectively decode visual stimuli from EEG. Further analysis shows that the Linear Attention module improves the separability of features across categories and localizes key brain-region information consistent with the experimental paradigm.
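The abstract names Linear Attention as the module that captures global dependencies at low computational cost, but does not give its exact formulation. As an illustration only, the following is a minimal NumPy sketch of kernelized linear attention in the common style that replaces softmax with a positive feature map φ(x) = elu(x) + 1; the feature map, shapes, and function names here are assumptions for demonstration, not necessarily LACNN's exact design.

```python
import numpy as np

def elu_feature_map(x):
    # phi(x) = elu(x) + 1: a positive feature map commonly used in
    # linear attention so that normalization terms stay > 0.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Kernelized (linear) attention over N tokens of dimension d.

    Computes phi(Q) (phi(K)^T V) / (phi(Q) sum_j phi(K_j)), which equals
    row-normalized phi(Q) phi(K)^T applied to V, but costs O(N d^2)
    instead of the O(N^2 d) of forming the full attention matrix.
    """
    Qp = elu_feature_map(Q)            # (N, d)
    Kp = elu_feature_map(K)            # (N, d)
    KV = Kp.T @ V                      # (d, d_v): keys/values summarized once
    Z = Qp @ Kp.sum(axis=0)            # (N,): per-query normalization
    return (Qp @ KV) / Z[:, None]

# Toy example: 8 "EEG feature tokens" of dimension 4
rng = np.random.default_rng(0)
N, d = 8, 4
Q, K, V = rng.normal(size=(3, N, d))
out = linear_attention(Q, K, V)
print(out.shape)  # → (8, 4)
```

Because the feature map is positive, the normalizer `Z` never vanishes, and associativity of matrix products is what lets the N×N attention matrix be avoided, which matches the low-complexity motivation stated in the abstract.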
