Abstract

Motor imagery (MI) is a brain-computer interface (BCI) paradigm in which specific brain regions are activated when a person imagines moving a limb (or muscle), even without actual movement. By measuring this neural activity, the technology converts the electroencephalogram (EEG) signals generated by the brain into computer-readable commands. Classifying motor imagery is therefore a core task in BCI research. Researchers have done considerable work on motor imagery classification, and the existing literature offers relatively mature decoding methods for two-class motor tasks. However, as the number of EEG-based motor imagery classes increases, decoding four-class motor imagery tasks requires further exploration. In this study, we design a hybrid neural network that combines spatiotemporal convolution with attention mechanisms. Specifically, the data are first processed by spatiotemporal convolution to extract features, then passed through a multi-branch convolution block; finally, the result is fed into a Transformer encoder layer, whose self-attention computation yields the classification. Our approach was evaluated on the well-known MI datasets BCI Competition IV 2a and 2b; on the 2a dataset, it achieves a global average classification accuracy of 83.3% and a kappa value of 0.78. Experimental results show that the proposed method outperforms most existing methods.
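The pipeline described above (spatiotemporal convolution, a multi-branch convolution block, then a Transformer encoder layer for self-attention) can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the paper's implementation: all layer sizes, kernel widths, the three-branch configuration, and the mean-pooling classification head are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class MIHybridNet(nn.Module):
    """Hypothetical sketch of the described pipeline:
    spatiotemporal convolution -> multi-branch convolution -> Transformer
    encoder. Hyperparameters are illustrative, not the paper's."""

    def __init__(self, n_channels=22, n_classes=4, d_model=32):
        super().__init__()
        # Spatiotemporal convolution: temporal filtering along the time
        # axis, then spatial filtering across all EEG electrodes.
        self.spatiotemporal = nn.Sequential(
            nn.Conv2d(1, d_model, kernel_size=(1, 25), padding=(0, 12)),
            nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(d_model),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
        )
        # Multi-branch block: parallel temporal convolutions with
        # different kernel sizes, outputs summed (assumed merge rule).
        self.branches = nn.ModuleList([
            nn.Conv2d(d_model, d_model, kernel_size=(1, k), padding=(0, k // 2))
            for k in (3, 7, 15)
        ])
        # Transformer encoder layer: self-attention over time steps.
        self.encoder = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=64, batch_first=True
        )
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, 1, channels, samples)
        x = self.spatiotemporal(x)        # (batch, d_model, 1, T)
        x = sum(branch(x) for branch in self.branches)
        x = x.squeeze(2).transpose(1, 2)  # (batch, T, d_model) token sequence
        x = self.encoder(x)               # self-attention across tokens
        return self.head(x.mean(dim=1))   # pool tokens, classify 4 ways
```

The default `n_channels=22` and `n_classes=4` match the BCI Competition IV 2a setting (22 EEG electrodes, four MI classes); for 2b, the channel and class counts would change accordingly.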
