Motor imagery (MI)-based brain-computer interfaces (BCIs) provide a promising route to limb rehabilitation for stroke patients. High-precision classification of MI-related EEG signals is critical to rehabilitation performance, yet it remains a challenging problem for multi-class MI signals. In this paper, we focus on four commonly used stroke rehabilitation actions and propose a modular temporal-spatial attention-based CNN (MTSACNN) for MI classification. Specifically, we conduct MI experiments and record EEG signals corresponding to imagined left/right fist clenching and left/right wrist dorsiflexion. MTSACNN first extracts low-order MI features with a temporal-spatial feature extraction (TSFE) module, in which a proposed group attention mechanism enables intra-group information interaction. Second, to reflect the short- and long-term working characteristics of the brain, high-order temporal features are further extracted and fused by a multi-level feature fusion (MLFF) module. Finally, four auxiliary losses in the classification (C) module accelerate model optimization. Experimental results show that MTSACNN decodes rehabilitation-related MI brain intentions effectively, achieving an average classification accuracy of 72.05% across fourteen subjects. This work contributes to the construction of high-performance stroke rehabilitation BCI systems.
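The abstract names a group attention mechanism for intra-group information interaction but does not specify its form. As a rough illustrative sketch only, not the paper's actual design: one plausible reading is a squeeze-and-excite-style reweighting applied independently within each channel group. The function name, the per-channel mean "squeeze", and the softmax weighting below are all assumptions for illustration.

```python
import numpy as np

def group_attention(x, n_groups):
    """Hypothetical group attention sketch (not the paper's definition).

    x: (channels, time) feature map; channels are split into n_groups
    equal groups, and each group's channels are reweighted by a
    softmax over a per-channel summary statistic.
    """
    c, t = x.shape
    assert c % n_groups == 0, "channels must divide evenly into groups"
    g = c // n_groups
    out = np.empty_like(x)
    for i in range(n_groups):
        block = x[i * g:(i + 1) * g]         # (g, time) channels of group i
        scores = block.mean(axis=1)          # "squeeze": per-channel summary
        w = np.exp(scores - scores.max())
        w /= w.sum()                         # softmax over the group's channels
        out[i * g:(i + 1) * g] = block * w[:, None]  # "excite": reweight
    return out
```

The key property of any such scheme is that attention weights are computed and applied per group, so interaction stays intra-group, matching the abstract's description.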