Abstract
Electroencephalography (EEG) is a non-invasive method for recording the brain’s electrical activity, widely used in brain-computer interface (BCI) applications to decode motor imagery (MI) signals. Traditional models that combine Convolutional Neural Networks (CNNs) with Transformers for decoding MI-EEG signals often fail to capture the crucial interrelationships between local and global features, resulting in suboptimal performance. To address this issue, we propose a computationally efficient model that integrates CNNs with a novel attention mechanism within the Transformer architecture. This design captures both local and global dependencies, enhancing feature extraction and decoding accuracy. Evaluations against state-of-the-art models (EEGNet, Deep ConvNet, IFNet, Conformer) on three public datasets (BCIC-IV-2a, BCIC-IV-2b, HGD) show substantial performance gains: on the BCIC-IV-2a dataset, our model improves accuracy by 9.78%, 11.05%, 4.81%, and 3.75% over these models, respectively. Ablation studies and visualization techniques (t-SNE, Grad-CAM) further corroborate the effectiveness and interpretability of the model, highlighting its superior decoding capabilities. The code is available at https://github.com/Whit3Zhao/TMSA-Net.git.
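The abstract's central idea, convolutions for local feature extraction followed by attention for global dependencies, can be illustrated with a minimal pure-Python sketch. This is a toy illustration of the general CNN-then-attention pattern, not the authors' TMSA-Net implementation; the function names, kernel, and signal values are all hypothetical.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of floats
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def conv1d(signal, kernel):
    # "valid" 1-D convolution: captures local patterns, as a CNN stage would
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def self_attention(seq):
    # toy single-head self-attention on a scalar sequence:
    # every position attends to every other position (global dependencies)
    out = []
    for q in seq:
        weights = softmax([q * k for k in seq])
        out.append(sum(w * v for w, v in zip(weights, seq)))
    return out

# hypothetical two-stage pipeline on one EEG channel:
# convolution extracts local features, attention mixes them globally
eeg_channel = [0.1, 0.5, -0.2, 0.8, 0.3, -0.1, 0.4]
local_features = conv1d(eeg_channel, [0.25, 0.5, 0.25])
global_features = self_attention(local_features)
print(len(local_features), len(global_features))  # 5 5
```

In the actual model, both stages operate on multi-channel feature maps and the attention is the paper's proposed mechanism; this sketch only shows why the two stages are complementary, since convolution sees a fixed local window while attention relates all positions at once.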