Abstract

Motor Imagery (MI) classification with electroencephalography (EEG) is a critical aspect of Brain–Computer Interface (BCI) systems, enabling individuals with mobility limitations to communicate with the outside world. However, the complexity, variability, and low signal-to-noise ratio of EEG data make decoding these signals challenging, particularly in a subject-independent setting. To overcome these challenges, we propose a transformer-based approach that employs self-attention to extract features in both the temporal and spatial domains. To capture spatial correlations across MI EEG channels, the spatial self-attention module updates each channel's representation with a weighted average of the features of all channels. This attention-weighted averaging improves classification accuracy and avoids the bias introduced by manual channel selection. In parallel, the temporal self-attention mechanism encodes global sequential information into the features at each time step, allowing superior temporal properties to be extracted from MI EEG data. The effectiveness of the proposed strategy is confirmed on the BCI Competition IV 2a and 2b benchmarks. Overall, our proposed model outperforms state-of-the-art methods and demonstrates greater stability under both subject-dependent and subject-independent evaluation.
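The weighted-averaging idea behind the spatial and temporal self-attention described above can be illustrated with a minimal sketch. This is not the authors' architecture: the learned query/key/value projections, multi-head structure, and classifier of a full transformer are omitted, and the channel count (22, as in the BCI Competition IV 2a montage) and sample length are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention (projections omitted):
    each row of X is replaced by a weighted average of all rows,
    with weights derived from pairwise row similarity."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)       # pairwise similarity
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ X                  # attention-weighted averaging

# Toy MI EEG trial: 22 channels x 128 time samples (illustrative sizes)
rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 128))

# Spatial attention: every channel is updated from all channels,
# so no channels need to be selected by hand.
spatial_out = self_attention(trial)       # shape (22, 128)

# Temporal attention: transpose so each time step attends to all
# others, injecting global sequential context into local features.
temporal_out = self_attention(trial.T).T  # shape (22, 128)
```

Because the attention weights form a proper convex combination (each row of the weight matrix sums to 1), every output channel or time step is a data-dependent average of the inputs rather than a fixed, hand-chosen subset.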
