Recently, motor imagery (MI)-based electroencephalogram (EEG) decoding has gained significant traction in brain-computer interface (BCI) technology, particularly for the rehabilitation of paralyzed patients. However, the low signal-to-noise ratio of MI-EEG makes effective decoding difficult and hinders the development of BCIs. In this paper, an attention-based multiscale EEGNet (AMEEGNet) is proposed to improve the decoding performance of MI-EEG. First, three parallel EEGNets with a fusion transmission method are employed to extract high-quality temporal-spatial features from the EEG data at multiple scales. Then, an efficient channel attention (ECA) module enhances the acquisition of more discriminative spatial features through a lightweight approach that weights critical channels. Experimental results demonstrate that the proposed model achieves decoding accuracies of 81.17%, 89.83%, and 95.49% on the BCI-2a, BCI-2b, and HGD datasets, respectively. These results show that the proposed AMEEGNet effectively decodes temporal-spatial features, providing a novel perspective on MI-EEG decoding and advancing future BCI applications.
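A minimal PyTorch sketch of the architecture described above (parallel EEGNet-style branches with different temporal scales, fused and followed by an ECA block) is shown below. The kernel sizes, filter counts, fusion by concatenation, and classifier head are illustrative assumptions, not the authors' exact configuration.

```python
import math
import torch
import torch.nn as nn


class ECA(nn.Module):
    """Efficient Channel Attention: per-channel weights from a small 1D conv."""
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        k = int(abs((math.log2(channels) + b) / gamma))
        k = k if k % 2 else k + 1  # kernel size must be odd
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                          # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                     # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # 1D conv across channels
        return x * torch.sigmoid(y)[:, :, None, None]


class EEGNetBranch(nn.Module):
    """One EEGNet-style branch: temporal conv followed by a depthwise spatial conv."""
    def __init__(self, n_eeg_ch: int, temporal_kernel: int, f1: int = 8, d: int = 2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(1, f1, (1, temporal_kernel),
                      padding=(0, temporal_kernel // 2), bias=False),
            nn.BatchNorm2d(f1),
            nn.Conv2d(f1, f1 * d, (n_eeg_ch, 1), groups=f1, bias=False),  # depthwise spatial conv
            nn.BatchNorm2d(f1 * d),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )

    def forward(self, x):                          # x: (B, 1, n_eeg_ch, T)
        return self.block(x)


class AMEEGNetSketch(nn.Module):
    """Three parallel branches at different temporal scales, fused by
    concatenation, re-weighted by ECA, then classified (assumed layout)."""
    def __init__(self, n_eeg_ch: int = 22, n_classes: int = 4):
        super().__init__()
        self.branches = nn.ModuleList(
            [EEGNetBranch(n_eeg_ch, k) for k in (32, 64, 96)]  # assumed kernel sizes
        )
        fused_ch = 3 * 16                                       # 3 branches x (f1 * d) feature maps
        self.eca = ECA(fused_ch)
        self.classify = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

    def forward(self, x):                          # x: (B, 1, n_eeg_ch, T)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.classify(self.eca(feats))


if __name__ == "__main__":
    model = AMEEGNetSketch()
    out = model(torch.randn(2, 1, 22, 1000))       # two trials of 22-channel, 1000-sample EEG
    print(out.shape)                               # torch.Size([2, 4])
```

The concatenation-then-ECA step mirrors the stated role of the attention module: the fused multiscale feature maps are re-weighted so that more discriminative channels contribute more to the final classification.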