Abstract

Brain-computer interfaces (BCIs) have begun to revolutionize many aspects of human life, ranging from health to smart living and communication devices. BCI technologies therefore require accurate systems for recognizing and classifying brain responses to a variety of motor imagery (MI) movements from electroencephalogram (EEG) signals. Deep learning models have recently received considerable interest for learning features and classifying many kinds of data, but they have not been thoroughly investigated for EEG signal classification. In this study, to improve performance in MI-EEG signal classification, we propose a new strategy based on continuous wavelet transform (CWT) time-frequency maps that generate two-dimensional (2D) EEG images from signals adaptively reconstructed from the extracted multivariate mode decomposition (MVMD) modes. In the next step, the extracted signals are projected into a new space using the proposed multiclass common spatial pattern (MCCSP) filtering. The resulting images are then fed to convolutional neural network (CNN) architectures (AlexNet and LeNet). The proposed framework has the benefit of achieving high classification accuracy even with a large amount of input data. LeNet and AlexNet reach the best average accuracy rates of 95.33% and 93.66%, respectively, on dataset 1 from BCI competition IV, and the results on dataset 2a from the same competition are more promising than the current state of the art. Our results show that MI-EEG task recognition using image classification approaches, together with CNNs, is comparable or even superior to existing traditional approaches and holds high potential for future research.
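As a rough illustration of the scalogram-image idea summarized above, the sketch below converts a single EEG segment into a CWT time-frequency image and passes it through a LeNet-style CNN. The library choices (PyWavelets, PyTorch), the Morlet wavelet, the scale range, the image size, and the layer dimensions are illustrative assumptions rather than the exact configuration used in the paper; the MVMD mode extraction and MCCSP filtering steps are omitted.

```python
# Minimal sketch: EEG segment -> CWT scalogram image -> LeNet-style CNN.
# All parameter choices below are assumptions for illustration only.
import numpy as np
import pywt
import torch
import torch.nn as nn
import torch.nn.functional as F

def eeg_to_scalogram(signal, fs=250, scales=np.arange(1, 65), size=(64, 64)):
    """Turn a 1-D EEG segment into a fixed-size 2-D time-frequency image."""
    coeffs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    img = np.abs(coeffs)                                        # magnitude scalogram
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)   # normalize to [0, 1]
    img = torch.tensor(img, dtype=torch.float32)[None, None]    # shape (1, 1, H, W)
    return F.interpolate(img, size=size, mode="bilinear", align_corners=False)

class LeNetStyleCNN(nn.Module):
    """LeNet-like classifier for 64x64 single-channel scalogram images."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 13 * 13, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: one synthetic 2-second "EEG" segment -> scalogram -> class logits.
t = np.linspace(0, 2, 500)
segment = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
image = eeg_to_scalogram(segment)            # shape (1, 1, 64, 64)
logits = LeNetStyleCNN(n_classes=4)(image)   # shape (1, 4), one score per MI class
```

In the actual pipeline described in the abstract, such images would be generated per trial from the MVMD-reconstructed, MCCSP-filtered signals and used to train AlexNet or LeNet.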
