Abstract

Brain–computer interfaces (BCIs) based on motor imagery (MI) enable people with motor disabilities to interact with the world through brain signals. To meet the demands of real-time, stable, and diverse interaction, it is crucial to develop lightweight networks that can accurately and reliably decode multi-class MI tasks. In this paper, we introduce BrainGridNet, a convolutional neural network (CNN) framework that combines two intersecting depthwise CNN branches with 3D electroencephalography (EEG) data to decode a five-class MI task. BrainGridNet attains competitive results in both the time and frequency domains, with superior performance in the frequency domain, achieving an accuracy of 80.26% and a kappa value of 0.753 and surpassing the state-of-the-art (SOTA) model. Additionally, BrainGridNet shows optimal computational efficiency, excels in decoding the most challenging subject, and maintains robust accuracy despite the random loss of 16 electrode signals. Finally, visualizations demonstrate that BrainGridNet learns discriminative features and identifies the critical brain regions and frequency bands corresponding to each MI class. The combination of strong feature extraction, high decoding accuracy, stable decoding performance, and low computational cost makes BrainGridNet an appealing choice for the development of BCIs.
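The abstract does not specify BrainGridNet's layer configuration, but the building block it names, a depthwise convolution over grid-arranged EEG data, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the channel count, grid size, and kernel size below are hypothetical placeholders.

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Depthwise 2D convolution with valid padding: each input channel is
    convolved with its own kernel, with no mixing across channels (the
    property that keeps depthwise branches lightweight).

    x: (C, H, W) input; kernels: (C, kH, kW), one kernel per channel.
    Returns an array of shape (C, H - kH + 1, W - kW + 1).
    """
    C, H, W = x.shape
    _, kH, kW = kernels.shape
    out = np.zeros((C, H - kH + 1, W - kW + 1))
    for c in range(C):                      # channels processed independently
        for i in range(H - kH + 1):
            for j in range(W - kW + 1):
                out[c, i, j] = np.sum(x[c, i:i + kH, j:j + kW] * kernels[c])
    return out

# Hypothetical shapes for illustration only: 4 feature channels over a
# 9x9 spatial grid of electrodes, filtered by 3x3 per-channel kernels.
x = np.random.randn(4, 9, 9)
k = np.random.randn(4, 3, 3)
y = depthwise_conv2d(x, k)
print(y.shape)  # (4, 7, 7)
```

Because each channel has its own kernel, a depthwise layer uses C·kH·kW weights instead of the C·C·kH·kW of a standard convolution, which is consistent with the paper's emphasis on low computational cost.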
