Abstract

In brain-computer interfaces (BCIs), motor imagery (MI) refers to decoding electroencephalogram (EEG) signals evoked by imagined movements, ultimately enabling individuals to control external devices. However, low signal-to-noise ratio, the large number of channels, and non-linearity are the essential challenges for accurate MI classification. To tackle these issues, we investigate the role of adaptive frequency-band selection and spatial-temporal feature learning in decoding motor imagery. We propose an Adaptive Filter of Frequency Bands based Coordinate Attention Network (AFFB-CAN) to improve MI classification performance. Specifically, we design the AFFB to adaptively obtain the upper and lower limits of the frequency bands, alleviating the information loss caused by manual band selection. We then propose a CAN-based network to emphasize key brain regions and temporal segments, and design a multi-scale module to enhance temporal context learning. Experiments on the BCI Competition IV-2a dataset show that our approach achieves an average accuracy, kappa value, and macro F1-score of 0.7825, 0.7104, and 0.7486, respectively; on the BCI Competition IV-2b dataset, the corresponding values are 0.8879, 0.7427, and 0.8734. These results indicate that the proposed AFFB-CAN method improves MI classification performance. In addition, our study confirms previous findings that motor imagery is mainly associated with the µ and β rhythms, and we further find that γ rhythms also play an important role in MI classification.
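To illustrate the coordinate-attention idea the abstract mentions (emphasizing key brain regions and temporal segments), the sketch below applies attention separately along the electrode and time axes of an EEG feature map. This is a minimal, hypothetical NumPy rendering of the general technique, not the authors' AFFB-CAN architecture; the projection matrices `w_h` and `w_t` stand in for learned weights and are assumptions.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def coordinate_attention(x, w_h, w_t):
    """Minimal coordinate-attention sketch for an EEG feature map.

    x   : (C, H, T) feature map -- C feature channels,
          H electrode (spatial) positions, T time steps.
    w_h : (C, C) projection for the spatial branch (stands in for
          learned weights; random here).
    w_t : (C, C) projection for the temporal branch.
    """
    # Pool along time to summarize each electrode position ...
    pooled_h = x.mean(axis=2)            # (C, H)
    # ... and along electrodes to summarize each time step.
    pooled_t = x.mean(axis=1)            # (C, T)
    # Project and squash into per-position / per-step gates in (0, 1).
    att_h = sigmoid(w_h @ pooled_h)      # (C, H) spatial gates
    att_t = sigmoid(w_t @ pooled_t)      # (C, T) temporal gates
    # Re-weight the map along both axes; broadcasting restores (C, H, T).
    return x * att_h[:, :, None] * att_t[:, None, :]


rng = np.random.default_rng(0)
x = rng.standard_normal((8, 22, 100))    # e.g. 22 electrodes, 100 samples
w_h = 0.1 * rng.standard_normal((8, 8))
w_t = 0.1 * rng.standard_normal((8, 8))
out = coordinate_attention(x, w_h, w_t)
print(out.shape)                         # (8, 22, 100)
```

Because each gate lies strictly in (0, 1), the output never exceeds the input in magnitude; the network would learn to keep gates near 1 for informative electrodes and time segments and suppress the rest.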
