Abstract

The limited number of motor imagery based brain-computer interface (MI-BCI) instruction sets for different movements of a single limb makes it difficult to meet practical application requirements. Therefore, designing a single-limb, multi-category motor imagery (MI) paradigm and decoding it effectively is an important research direction for the future development of MI-BCI. A further major challenge in MI-BCI is classifying brain activity across different individuals. In this article, the transfer data learning network (TDLNet) is proposed to achieve cross-subject intention recognition for multiclass upper-limb motor imagery. In TDLNet, the Transfer Data Module (TDM) processes cross-subject electroencephalogram (EEG) signals in groups and then fuses cross-subject channel features through two one-dimensional convolutions. The Residual Attention Mechanism Module (RAMM) assigns a weight to each EEG channel and dynamically focuses on the channels most relevant to a specific task. Additionally, a feature visualization algorithm based on occluding signal frequencies is proposed to qualitatively analyze the proposed TDLNet. The experimental results show that TDLNet achieves the best classification results on two datasets compared with CNN-based reference methods and a transfer-learning method. In the 6-class scenario, TDLNet obtains an accuracy of 65%±0.05 on the ULM6 dataset and 63%±0.06 on the GRAZ dataset. The visualization results demonstrate that the proposed framework can produce distinct classifier patterns for multiple categories of upper-limb motor imagery through signals of different frequencies. The ULM6 dataset is available at https://dx.doi.org/10.21227/8qw6-f578.
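The two operations named above can be illustrated with a minimal NumPy sketch. This is not the TDLNet implementation (the abstract does not specify layer sizes, kernel widths, or the attention design); all shapes, the two-layer gating MLP, and the random weights below are hypothetical, chosen only to show (a) channel-feature fusion via two stacked one-dimensional convolutions and (b) residual re-weighting of EEG channels.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """'valid' 1-D convolution along the time axis.
    x: (C_in, T); kernels: (C_out, C_in, K) -> output (C_out, T-K+1)."""
    c_out, c_in, k = kernels.shape
    out = np.zeros((c_out, x.shape[1] - k + 1))
    for o in range(c_out):
        for c in range(c_in):
            # flip the kernel so np.convolve performs cross-correlation
            out[o] += np.convolve(x[c], kernels[o, c][::-1], mode="valid")
    return out

def residual_channel_attention(x, w1, w2):
    """Gate each channel with a two-layer MLP over its time-average,
    then add the input back (residual re-weighting). Hypothetical design."""
    s = x.mean(axis=1)                       # squeeze over time: (C,)
    h = np.maximum(w1 @ s, 0.0)              # hidden layer, ReLU
    g = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # per-channel sigmoid gate: (C,)
    return x + x * g[:, None]                # residual connection

# Toy trial: 8 EEG channels, 128 time samples (hypothetical sizes)
x = rng.standard_normal((8, 128))
k1 = rng.standard_normal((16, 8, 5)) * 0.1   # first 1-D conv: fuse 8 -> 16
k2 = rng.standard_normal((16, 16, 5)) * 0.1  # second stacked 1-D conv
feat = conv1d(conv1d(x, k1), k2)
attn = residual_channel_attention(
    x, rng.standard_normal((4, 8)), rng.standard_normal((8, 4)))
print(feat.shape, attn.shape)  # (16, 120) (8, 128)
```

The attention output keeps the input's shape, so the module can be dropped into a network without changing downstream layer sizes; the convolution stack shortens the time axis because no padding is applied in this sketch.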

