Abstract

Background
Motor imagery-based electroencephalogram (EEG) brain-computer interface (BCI) technology has advanced considerably in recent years, and deep learning has outperformed more traditional approaches in classification performance. Even so, it remains challenging to design and train an end-to-end network that can adequately extract the discriminative characteristics of EEG signals used in motor imagery. BCI research depends fundamentally on accurate classification of EEG data, and despite the variety of machine learning and deep learning methods that have been proposed, many challenges remain in motor imagery (MI) classification.

Methodology
We propose an attention-based model for four-class classification of motor imagery EEG signals: left hand, right hand, foot, and tongue/rest. The model is built on multi-scale spatiotemporal self-attention networks. In the spatial domain, self-attention identifies the most informative channels by assigning greater weight to channels associated with motion and lesser weight to unrelated channels. In the temporal domain, parallel multi-scale Temporal Convolutional Network (TCN) layers extract features at various scales while suppressing noise.

Result
On the BCI Competition IV-2b dataset, the proposed model achieved an accuracy of 85.09%; on the HGD dataset, it achieved 96.26%.

Comparison with existing methods
In single-subject classification, this approach achieves higher accuracy than existing methods.

Conclusion
The findings suggest that this approach offers strong performance, robustness, and transfer-learning capacity.
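The two mechanisms named in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the dot-product attention over channels and the moving-average kernels standing in for learned TCN filters are assumptions chosen only to show the data flow (channel re-weighting followed by parallel multi-scale temporal filtering).

```python
# Hedged sketch of the two ideas in the abstract: spatial self-attention
# that re-weights EEG channels, and parallel multi-scale temporal
# convolutions. Kernel sizes and attention form are illustrative guesses.
import numpy as np

def spatial_self_attention(x):
    """x: (channels, time). Score each channel against every other channel
    with scaled dot-product attention and return re-weighted signals."""
    scores = x @ x.T / np.sqrt(x.shape[1])            # (C, C) similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ x                                # (C, T) re-weighted

def multiscale_temporal_features(x, kernel_sizes=(3, 7, 15)):
    """Parallel temporal filters at several scales; a simple moving-average
    kernel stands in for each learned TCN branch."""
    feats = []
    for k in kernel_sizes:
        kern = np.ones(k) / k
        feats.append(np.stack([np.convolve(ch, kern, mode="same")
                               for ch in x]))
    return np.concatenate(feats, axis=0)              # (C * scales, T)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((22, 256))                  # 22 channels, 256 samples
attended = spatial_self_attention(eeg)
features = multiscale_temporal_features(attended)
print(attended.shape, features.shape)                 # (22, 256) (66, 256)
```

In a trained model the attention weights and temporal filters would be learned parameters; the fixed kernels here only demonstrate how the spatial and temporal stages compose.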
