Background: The rapid expansion of Brain-Computer Interface (BCI) technology in neuroscience, which relies on electroencephalogram (EEG) signals associated with motor imagery, has yielded outcomes that rival conventional approaches, largely owing to the success of deep learning. Nevertheless, designing and training a network that extracts the underlying characteristics of motor imagery EEG data remains challenging.

New method: This paper presents a multi-scale spatiotemporal self-attention (SA) network that classifies motor imagery EEG signals into four classes (left hand, right hand, foot, tongue/rest) by exploiting the temporal and spatial properties of EEG. The self-attention mechanism autonomously assigns larger weights to channels linked to motor activity and smaller weights to movement-unrelated channels, thereby selecting the most informative channels. The network utilises parallel multi-scale Temporal Convolutional Network (TCN) layers to extract temporal features at various scales, effectively suppressing noise in the temporal domain.

Results: The proposed model achieves accuracies of 79.26%, 85.90%, and 96.96% on the BCI Competition IV-2a and IV-2b datasets and on HGD, respectively.

Comparison with existing methods: In single-subject classification accuracy, the proposed approach outperforms existing methods.

Conclusion: The results indicate that the proposed method offers favourable performance, robustness, and transfer-learning capability.
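As a rough illustration of the architecture described above, the following PyTorch sketch combines a channel self-attention block with parallel multi-scale temporal convolutions. All names (ChannelSelfAttention, MultiScaleTCNBlock, MSSTSANet), layer sizes, kernel scales, and the pooling/classifier head are illustrative assumptions; the abstract does not specify the authors' exact design. Placing the attention block before the TCN branches mirrors the abstract's description of channel selection preceding multi-scale temporal feature extraction.

```python
import torch
import torch.nn as nn


class ChannelSelfAttention(nn.Module):
    """Scaled dot-product self-attention across EEG channels, so channels
    linked to motor activity can receive larger weights (sketch)."""
    def __init__(self, d_model: int):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x):  # x: (batch, channels, time)
        q, k, v = self.query(x), self.key(x), self.value(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v  # channel-weighted mixture, same shape as x


class MultiScaleTCNBlock(nn.Module):
    """Parallel temporal convolutions at several kernel scales; outputs are
    concatenated to capture multi-scale temporal features. Scales assumed."""
    def __init__(self, in_ch: int, out_ch: int, scales=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, out_ch, k, padding=k // 2),
                nn.BatchNorm1d(out_ch),
                nn.ELU(),
            )
            for k in scales
        ])

    def forward(self, x):  # x: (batch, in_ch, time)
        return torch.cat([branch(x) for branch in self.branches], dim=1)


class MSSTSANet(nn.Module):
    """Hypothetical end-to-end pipeline: channel self-attention, then
    multi-scale temporal convolution, then a linear classifier."""
    def __init__(self, n_eeg_ch=22, n_classes=4, feat=16, time_len=1000):
        super().__init__()
        self.chan_attn = ChannelSelfAttention(time_len)
        self.tcn = MultiScaleTCNBlock(n_eeg_ch, feat)
        self.pool = nn.AdaptiveAvgPool1d(1)          # average over time axis
        self.head = nn.Linear(feat * 3, n_classes)   # 3 parallel scales

    def forward(self, x):  # x: (batch, n_eeg_ch, time_len)
        h = self.chan_attn(x)          # re-weight EEG channels
        h = self.tcn(h)                # (batch, feat * 3, time_len)
        h = self.pool(h).flatten(1)    # (batch, feat * 3)
        return self.head(h)            # 4-class logits


# Shape check on a dummy batch of BCI IV-2a-like trials (22 channels).
model = MSSTSANet()
logits = model(torch.randn(8, 22, 1000))
print(logits.shape)  # torch.Size([8, 4])
```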