Deep learning methods, particularly convolutional neural networks (CNNs), have advanced motor imagery (MI) decoding by effectively extracting spatio-spectral–temporal (SST) features from electroencephalography (EEG) signals. Image-based CNNs using time–frequency analyses such as the wavelet transform have outperformed signal-based CNNs in MI-EEG classification. However, the spatial characteristics of EEG signals can be neglected because of the high dimensionality of the spectral–temporal feature space. Moreover, performance hinges on how well the predefined transformation captures the class-discriminative spectral–temporal representations of the EEG signals. Despite attempts to alleviate these challenges by selecting EEG electrodes or restricting the spectral–temporal feature space, large inter-subject variability has impeded successful MI-EEG decoding. In this paper, we propose a learnable continuous wavelet-based multi-branch attentive CNN framework for decoding MI-EEG signals. Our method automatically generates class-discriminative SST representations of EEG signals within subject-specific sub-bands by utilizing learnable wavelet-based convolutions, and it employs multi-branch attentive CNNs for the effective and efficient extraction of local and global SST features. A comprehensive evaluation on two public datasets demonstrates that the proposed method significantly outperforms state-of-the-art methods. We also conduct an ablation study under various configuration settings within the proposed framework to demonstrate the effectiveness of each component. The proposed method provides a powerful and innovative approach to MI-EEG signal decoding, with implications for personalized EEG decoding.
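The abstract does not give implementation details, but the core idea of a wavelet-based convolution can be illustrated with a minimal sketch. The snippet below builds real-valued Morlet kernels whose center frequency and bandwidth are the quantities that would be made learnable in such a framework (here they are fixed constants for illustration; the function names, sampling rate, and parameter choices are all assumptions, not the authors' code) and convolves them with a synthetic single-channel EEG trace to produce a spectral–temporal map:

```python
import numpy as np

def morlet_kernel(freq, sigma, fs=250.0, width_s=0.5):
    """Real-valued Morlet wavelet: Gaussian envelope times a cosine carrier.

    freq (Hz) and sigma (s) play the role of the learnable parameters in a
    learnable-wavelet convolution; here they are plain constants.
    """
    t = np.arange(-width_s, width_s, 1.0 / fs)
    return np.exp(-t**2 / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * freq * t)

def wavelet_conv(eeg, freqs, sigmas, fs=250.0):
    """One output row per (freq, sigma) pair -> a spectral-temporal map."""
    return np.stack([
        np.convolve(eeg, morlet_kernel(f, s, fs), mode="same")
        for f, s in zip(freqs, sigmas)
    ])

fs = 250.0
t = np.arange(0.0, 2.0, 1.0 / fs)
eeg = np.sin(2.0 * np.pi * 10.0 * t)  # synthetic 10 Hz mu-band oscillation
maps = wavelet_conv(eeg, freqs=[10.0, 22.0], sigmas=[0.10, 0.05], fs=fs)
# The 10 Hz branch responds much more strongly than the 22 Hz branch,
# since the signal's energy lies in the mu band.
```

In a trainable version, `freq` and `sigma` would be optimized by gradient descent alongside the downstream CNN weights, letting each subject's model settle on its own discriminative sub-bands rather than relying on a predefined filter bank.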