Abstract
Introduction
A brain-computer interface (BCI) is an emerging technology that aims to establish a direct communication pathway between the human brain and external devices. Motor imagery electroencephalography (MI-EEG) signals are analyzed to infer users’ intentions during motor imagery, and they hold potential for applications in rehabilitation training and device control. However, the classification accuracy of MI-EEG signals remains a key challenge for the development of BCI technology.

Methods
This paper proposes a composite improved attention convolutional network (CIACNet) for MI-EEG signal classification. CIACNet utilizes a dual-branch convolutional neural network (CNN) to extract rich temporal features, an improved convolutional block attention module (CBAM) to enhance feature extraction, a temporal convolutional network (TCN) to capture advanced temporal features, and multi-level feature concatenation to obtain a more comprehensive feature representation.

Results
The CIACNet model performs well on both the BCI IV-2a and BCI IV-2b datasets, achieving accuracies of 85.15% and 90.05%, respectively, with a kappa score of 0.80 on both datasets. These results indicate that the CIACNet model’s classification performance exceeds that of four other comparative models.

Conclusion
Experimental results demonstrate that the proposed CIACNet model has strong classification capabilities and low time cost. Removing one or more blocks results in a decline in the overall performance of the model, indicating that each block makes a significant contribution to its overall effectiveness. These results demonstrate the ability of the CIACNet model to reduce time costs and improve performance in motor imagery brain-computer interface (MI-BCI) systems, while also highlighting its practical applicability.
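To make the attention component of the pipeline concrete, the sketch below implements the standard CBAM scheme (channel attention followed by spatial attention) in plain NumPy. The abstract does not specify how CIACNet modifies CBAM, so this is only a minimal illustration of the baseline mechanism; all function names, weight shapes, and the 1×1 stand-in for CBAM's learned spatial convolution are assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Channel attention: a shared two-layer MLP scores avg- and
    max-pooled channel descriptors; x has shape (C, H, W)."""
    avg = x.mean(axis=(1, 2))                      # (C,) average-pooled descriptor
    mx = x.max(axis=(1, 2))                        # (C,) max-pooled descriptor
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)   # shared MLP on both descriptors,
                  + w2 @ np.maximum(w1 @ mx, 0.0)) # summed before the sigmoid
    return x * att[:, None, None]                  # rescale each channel

def spatial_attention(x):
    """Spatial attention: pool across channels, then combine the two
    maps (a fixed average here stands in for CBAM's learned conv)."""
    avg = x.mean(axis=0, keepdims=True)            # (1, H, W)
    mx = x.max(axis=0, keepdims=True)              # (1, H, W)
    att = sigmoid(0.5 * (avg + mx))                # hypothetical combination
    return x * att                                 # rescale each spatial location

def cbam(x, w1, w2):
    # CBAM applies channel attention first, then spatial attention
    return spatial_attention(channel_attention(x, w1, w2))
```

Because both attention maps pass through a sigmoid, CBAM only rescales the input feature map (values in (0, 1)) and preserves its shape, which is why it can be dropped into a CNN branch without altering downstream layer dimensions.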