Abstract

In recent years, motor imagery (MI)-based brain-computer interfaces (BCIs) have gained the attention of researchers and been widely used in many fields, such as medical rehabilitation and entertainment. The temporal and spatial features of electroencephalography (EEG) signals are crucial for MI-BCI classification. This study proposes an improved spatiotemporal feature extraction method that combines the common spatial pattern (CSP) with a hybrid spatiotemporal attention convolutional neural network (HSTA-Net). The method first segments the MI-EEG signal into multiple time windows, projects each window into a feature space via CSP, and feeds the resulting features into the HSTA-Net in parallel. A scaled dot-product attention module and a multi-head attention module then extract attention-weighted spatial features. Finally, the features from all time windows produced by the HSTA-Net are integrated and used for classification. The proposed method achieves a classification accuracy of 77.3% on the BCI Competition IV 2a dataset. The experimental results show that the proposed model achieves better classification performance: the CSP-HSTA-Net model quickly adapts to the distribution of MI-EEG data during training, increasing the interclass distance of the extracted features while decreasing their intraclass distance, and thus better accomplishes the MI-BCI classification task.
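The pipeline described above (windowing, CSP projection, then attention over per-window features) can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the HSTA-Net's convolutional layers and multi-head attention module are omitted, the data are synthetic, and all function names (`csp_filters`, `csp_features`, `scaled_dot_product_attention`) are hypothetical.

```python
import numpy as np

def csp_filters(cov_a, cov_b, n_filters=4):
    """Hypothetical CSP: spatial filters that maximize variance for one
    class while minimizing it for the other (whitening + eigendecomposition)."""
    # Whiten with the composite covariance.
    d, u = np.linalg.eigh(cov_a + cov_b)
    p = u @ np.diag(d ** -0.5) @ u.T
    # Eigenvectors of the whitened class-a covariance give the CSP directions.
    _, b = np.linalg.eigh(p @ cov_a @ p.T)   # eigenvalues ascending
    w = b.T @ p                              # rows are spatial filters
    # Keep the filters at both variance extremes.
    return np.vstack([w[: n_filters // 2], w[-(n_filters // 2):]])

def csp_features(window, w):
    """Log-variance features of a CSP-filtered EEG time window."""
    z = w @ window                           # (n_filters, samples)
    var = z.var(axis=1)
    return np.log(var / var.sum())

def scaled_dot_product_attention(q, k, v):
    """Standard scaled dot-product attention over per-window features."""
    scores = (q @ k.T) / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
n_ch, n_samp, n_windows = 8, 250, 4

# Toy per-class covariances from synthetic trials (stand-ins for real MI-EEG).
trial_a = rng.standard_normal((n_ch, 1000)) * np.linspace(1.0, 2.0, n_ch)[:, None]
trial_b = rng.standard_normal((n_ch, 1000)) * np.linspace(2.0, 1.0, n_ch)[:, None]
cov = lambda x: (x @ x.T) / np.trace(x @ x.T)
w = csp_filters(cov(trial_a), cov(trial_b), n_filters=4)

# One trial cut into time windows, each projected by the CSP filters.
trial = rng.standard_normal((n_ch, n_samp * n_windows))
feats = np.stack([csp_features(trial[:, i * n_samp:(i + 1) * n_samp], w)
                  for i in range(n_windows)])  # (n_windows, n_filters)

# Self-attention across windows integrates the per-window features
# before a final classifier (not shown) would consume them.
fused, att = scaled_dot_product_attention(feats, feats, feats)
print(fused.shape)  # (4, 4): one attention-fused feature vector per window
```

In the paper's model these fused window features would pass through further convolutional and multi-head attention stages before classification; the sketch only shows why attention is a natural way to weight and combine windows of unequal discriminative value.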