Abstract

Background: Motor imagery (MI) based brain-computer interfaces (BCIs) show promising potential in neuro-rehabilitation. However, because the active brain regions during MI tasks vary across individuals, decoding MI EEG signals remains challenging, and improved classification performance is needed for practical application.

New method: This study proposes a self-attention-based convolutional neural network (CNN) combined with a time-frequency common spatial pattern (TFCSP) for enhanced MI classification. Because training data are limited, a data augmentation strategy is employed to expand the MI EEG datasets. The self-attention-based CNN is trained to automatically extract temporal and spatial information from EEG signals, with the self-attention module selecting active channels by computing EEG channel weights. TFCSP is then applied to extract multiscale time-frequency-space features from the EEG data. Finally, the features derived from TFCSP are concatenated with those from the self-attention-based CNN for MI classification.

Results: The proposed method is evaluated on two publicly available datasets, BCI Competition IV IIa and BCI Competition III IIIa, yielding mean accuracies of 79.28% and 86.39%, respectively.

Conclusions: Compared with state-of-the-art methods, the proposed approach achieves superior classification accuracy. Combining the self-attention-based CNN with TFCSP makes full use of the time-frequency-space information in EEG and enhances classification performance.
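The channel-selection idea described above — scoring each EEG channel and normalizing the scores into attention weights that reweight the channels — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the scoring projection `w_key`, the channel count (22 channels, as in BCI Competition IV IIa), and the random data are all assumptions for demonstration.

```python
import numpy as np

def channel_attention(eeg, w_key):
    """Hedged sketch of self-attention-style channel weighting.

    eeg   : array of shape (channels, samples), one EEG trial
    w_key : assumed learnable scoring vector of shape (samples,)

    Each channel gets a scalar score, scores are softmax-normalized
    into weights summing to 1, and channels are reweighted so that
    more "active" channels contribute more to later feature extraction.
    """
    scores = eeg @ w_key                      # one score per channel
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights[:, None] * eeg, weights

# Toy usage with random data standing in for a real MI trial.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((22, 1000))   # 22 channels x 1000 samples (assumed)
w_key = rng.standard_normal(1000) / 1000.0
weighted_eeg, attn = channel_attention(eeg, w_key)
```

In the paper's pipeline these weights would be produced by a trained self-attention module inside the CNN rather than a fixed vector; the sketch only shows how a per-channel weight distribution can rescale the signal before spatial feature extraction.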
