Abstract
Brain-computer interfaces (BCIs) are an innovative technology that uses artificial intelligence (AI) and wearable electroencephalography (EEG) sensors to decode brain signals and enhance quality of life. EEG-based motor imagery (MI) signals are used in many BCI applications, including smart healthcare, smart homes, and robotics control. However, the limited ability to decode brain signals is a major factor preventing BCI technology from expanding significantly. In this study, we introduce a dynamic attention temporal convolutional network (D-ATCNet) for decoding EEG-based motor imagery signals. The D-ATCNet model uses dynamic convolution and multilevel attention to enhance MI classification performance with a relatively small number of parameters. D-ATCNet has two main blocks: dynamic convolution and temporal convolution. The dynamic convolution block uses multilevel attention to encode low-level MI-EEG information, and the temporal convolution block uses shifted-window self-attention to extract high-level temporal information from the encoded signal. The proposed model outperforms existing methods, achieving 71.3% accuracy in the subject-independent setting and 87.08% in the subject-dependent setting on the BCI Competition IV-2a dataset.
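The dynamic convolution idea mentioned above can be illustrated with a minimal sketch: several candidate kernels are aggregated into a single input-dependent kernel using softmax attention weights computed from the pooled input. All names, shapes, and the small attention head below are illustrative assumptions, not the authors' exact D-ATCNet implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over a 1-D vector
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_conv1d(x, kernels, attn_w, attn_b):
    """Input-dependent 1-D convolution (illustrative sketch).

    x:       (C, T)    multichannel EEG segment
    kernels: (K, C, F) K candidate convolution kernels
    attn_w:  (K, C)    weights of a tiny attention head (assumed form)
    attn_b:  (K,)      bias of the attention head
    """
    pooled = x.mean(axis=1)                  # global average pool -> (C,)
    a = softmax(attn_w @ pooled + attn_b)    # per-input kernel weights (K,)
    W = np.tensordot(a, kernels, axes=1)     # aggregated kernel (C, F)
    F = W.shape[1]
    T = x.shape[1]
    out = np.empty(T - F + 1)
    for t in range(T - F + 1):               # "valid" 1-D convolution
        out[t] = (x[:, t:t + F] * W).sum()
    return a, out

C, T, K, F = 4, 64, 3, 8
x = rng.standard_normal((C, T))
kernels = rng.standard_normal((K, C, F))
attn_w = rng.standard_normal((K, C))
attn_b = np.zeros(K)
a, y = dynamic_conv1d(x, kernels, attn_w, attn_b)
```

Because the attention weights depend on the input, each trial is effectively filtered by its own kernel, at the cost of only K small kernels and a tiny attention head rather than a larger static convolution.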