Abstract

The emergence of deep learning methods has driven the widespread use of brain–machine interface motor imagery classification in machine control and medical rehabilitation, achieving classification accuracy superior to that of traditional machine learning methods. However, models trained with current mainstream deep learning methods show accuracy variations of over 20% when classifying data from different subjects in the same dataset. This large variation indicates weak model robustness and difficulty in extracting features for some subjects. Because motor imagery classification serves individual users, results that vary widely from one user to another hinder the adoption of the technique. In our research, we find that the accuracy differences between subjects are caused by differences in the spatial characteristics and training difficulty of their data. Therefore, identifying and weakening these differences can narrow the accuracy gap between subjects and ensure that the model achieves good classification accuracy for each subject. We call this operation of reducing the accuracy gap individual-differences weakening. To implement it, we propose a Double-branch Graph Convolutional Attention Neural Network (DGCAN), which uses a graph neural network to select channels that are less disturbed by spatial-location factors and applies spatial–temporal convolution to extract the features contained in the selected channels; weakening the influence of spatial features in this way contributes to individual-differences weakening. We also design a loss function, EegLoss, which focuses training on hard samples and effectively reduces the amount of data to which the model is insensitive across subjects. We evaluate the model on the BCI Competition IV datasets 2a and 2b, achieving accuracies of 84% and 86%, respectively. We also compare the accuracy gaps between subjects, showing that our model effectively reduces this gap and is more robust than previous models.
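As an illustration of the hard-sample-focused idea behind EegLoss, the sketch below shows a focal-loss-style weighting of cross-entropy in PyTorch, where low-confidence (hard) samples receive larger weight during training. The function name hard_sample_weighted_loss and the focusing parameter gamma are illustrative assumptions; the abstract does not specify the actual EegLoss formulation, so this is only a minimal sketch of the general technique.

```python
import torch
import torch.nn.functional as F

def hard_sample_weighted_loss(logits: torch.Tensor,
                              targets: torch.Tensor,
                              gamma: float = 2.0) -> torch.Tensor:
    """Cross-entropy re-weighted so that hard (low-confidence) samples
    dominate the gradient, in the spirit of focal loss.

    logits:  (batch, n_classes) raw network outputs
    targets: (batch,) integer class labels
    gamma:   focusing strength; gamma = 0 recovers plain cross-entropy
    (illustrative sketch only; not the paper's EegLoss definition)
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # Probability assigned to the correct class for each sample.
    p_t = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    # Down-weight easy samples (p_t close to 1) and emphasise hard ones.
    focal_weight = (1.0 - p_t) ** gamma
    ce = F.nll_loss(log_probs, targets, reduction="none")
    return (focal_weight * ce).mean()
```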
