Abstract

Motor imagery (MI) is a mental process widely utilized as the experimental paradigm for brain-computer interfaces (BCIs) across a broad range of basic science and clinical studies. However, decoding intentions from MI remains challenging due to the inherent complexity of brain patterns and the small sample sizes available for training machine learning models. This paper proposes an end-to-end Filter-Bank Multiscale Convolutional Neural Network (FBMSNet) for MI classification. A filter bank is first employed to derive a multiview spectral representation of the EEG data. Mixed depthwise convolution is then applied to extract temporal features at multiple scales, followed by spatial filtering to mitigate volume conduction. Finally, with the joint supervision of cross-entropy and center loss, FBMSNet learns features that maximize interclass dispersion and intraclass compactness. We compare FBMSNet with several state-of-the-art EEG decoding methods on two MI datasets: the BCI Competition IV 2a dataset and the OpenBMI dataset. FBMSNet significantly outperforms the benchmark methods, achieving hold-out classification accuracies of 79.17% on the four-class task and 70.05% on the two-class task. These results demonstrate the efficacy of FBMSNet in improving EEG decoding performance toward more robust BCI applications. The FBMSNet source code is available at https://github.com/Want2Vanish/FBMSNet.
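To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of its main stages: a filter-bank input with one channel per sub-band, mixed (multi-kernel) depthwise temporal convolution, depthwise spatial filtering across electrodes, and joint supervision by cross-entropy and center loss. The band count, kernel lengths, layer widths, and pooling scheme are illustrative assumptions, not the authors' exact configuration; the reference implementation is in the linked repository.

```python
import torch
import torch.nn as nn

class MixedDepthwiseConv(nn.Module):
    """Parallel depthwise temporal convolutions with different kernel lengths,
    concatenated along the channel axis to yield multiscale temporal features."""
    def __init__(self, in_ch, depth_mult=3, kernel_sizes=(15, 31, 63, 125)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, in_ch * depth_mult, (1, k),
                      padding=(0, k // 2), groups=in_ch, bias=False)
            for k in kernel_sizes
        ])

    def forward(self, x):                       # x: (batch, bands, electrodes, time)
        return torch.cat([b(x) for b in self.branches], dim=1)

class FBMSNetSketch(nn.Module):
    """Illustrative FBMSNet-style model; hyperparameters are assumptions."""
    def __init__(self, n_bands=9, n_electrodes=22, n_classes=4,
                 depth_mult=3, n_kernels=4, spatial_depth=2):
        super().__init__()
        temporal_out = n_bands * depth_mult * n_kernels
        self.temporal = MixedDepthwiseConv(n_bands, depth_mult)
        self.bn1 = nn.BatchNorm2d(temporal_out)
        # Depthwise spatial filtering across electrodes (mitigates volume conduction)
        self.spatial = nn.Conv2d(temporal_out, temporal_out * spatial_depth,
                                 (n_electrodes, 1), groups=temporal_out, bias=False)
        self.bn2 = nn.BatchNorm2d(temporal_out * spatial_depth)
        self.act = nn.ELU()
        self.pool = nn.AdaptiveAvgPool2d((1, 4))   # simple temporal pooling stand-in
        self.classifier = nn.Linear(temporal_out * spatial_depth * 4, n_classes)

    def forward(self, x):                       # x: (batch, bands, electrodes, time)
        x = self.bn1(self.temporal(x))
        x = self.act(self.bn2(self.spatial(x)))
        feats = self.pool(x).flatten(1)         # embedding used by the center loss
        return feats, self.classifier(feats)

class CenterLoss(nn.Module):
    """Pulls each embedding toward its class center (intraclass compactness)."""
    def __init__(self, n_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, feat_dim))

    def forward(self, feats, labels):
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

# Joint supervision: total loss = cross-entropy + lambda * center loss.
model = FBMSNetSketch()
center_loss = CenterLoss(n_classes=4, feat_dim=216 * 4)   # 216 = 9 * 3 * 4 * 2
x = torch.randn(8, 9, 22, 1000)   # 8 trials, 9 sub-bands, 22 electrodes, 1000 samples
y = torch.randint(0, 4, (8,))
feats, logits = model(x)
loss = nn.functional.cross_entropy(logits, y) + 0.001 * center_loss(feats, y)
```

Here the filter-bank step is assumed to happen offline (e.g., bandpass filtering each trial into several sub-bands stacked along the first input dimension), and the center-loss weight of 0.001 is a placeholder hyperparameter.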
