Abstract

Objective. Motor imagery (MI) electroencephalography (EEG) classification is regarded as a promising technology for brain–computer interface (BCI) systems, which help people communicate with the outside world through their neural activity. However, accurately decoding human intent is challenging because EEG signals have a low signal-to-noise ratio and are non-stationary. Methods that extract features directly from raw EEG signals ignore key frequency-domain information, so one of the challenges in MI classification is supplementing the frequency-domain information that the raw signal representation lacks. Approach. In this study, we fuse models with complementary characteristics to develop a multiscale space-time-frequency feature-guided multitask learning convolutional neural network (CNN) architecture. The proposed method consists of four modules: a space-time feature-based representation module, a time-frequency feature-based representation module, a multimodal fused feature-guided generation module, and a classification module. The framework is based on multitask learning: the four modules are trained on three tasks simultaneously and jointly optimized. Results. The proposed method is evaluated on three public challenge datasets. Quantitative analysis shows that it outperforms most state-of-the-art machine learning and deep learning techniques for EEG classification, demonstrating the robustness and effectiveness of our method. Moreover, the method is used to control a robot from EEG signals, verifying its feasibility in real-time applications. Significance. To the best of our knowledge, a deep CNN architecture that fuses inputs with complementary characteristics has not previously been applied to BCI tasks. Because the three tasks interact within the multitask learning architecture, our method improves the generalization and accuracy of both subject-dependent and subject-independent classification with limited annotated data.
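The overall structure described above can be summarized in a short sketch. The PyTorch code below is a minimal, illustrative rendering of the four modules and the three jointly optimized tasks, not the paper's exact design: the layer shapes, the 22-channel/4-class configuration, the spectrogram dimensions, and the equal loss weights are all assumptions made for the example.

```python
# Minimal sketch of a two-branch multitask CNN for MI-EEG classification.
# All layer sizes, input shapes, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

class SpaceTimeBranch(nn.Module):
    """Space-time representation module: temporal then spatial convolutions over raw EEG."""
    def __init__(self, n_channels=22, n_features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),  # temporal conv
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),          # spatial conv
            nn.BatchNorm2d(32), nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 8)), nn.Flatten(),
            nn.Linear(32 * 8, n_features),
        )

    def forward(self, x):  # x: (batch, 1, channels, time)
        return self.net(x)

class TimeFrequencyBranch(nn.Module):
    """Time-frequency representation module: CNN over a spectrogram of the same trial."""
    def __init__(self, n_features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ELU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ELU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(32 * 16, n_features),
        )

    def forward(self, s):  # s: (batch, 1, freq_bins, time_frames)
        return self.net(s)

class MultitaskMIEEGNet(nn.Module):
    """Fuses both branches; each of the three heads defines one training task."""
    def __init__(self, n_classes=4, n_features=64):
        super().__init__()
        self.st = SpaceTimeBranch(n_features=n_features)
        self.tf = TimeFrequencyBranch(n_features=n_features)
        self.fuse = nn.Sequential(nn.Linear(2 * n_features, n_features), nn.ELU())
        self.head_st = nn.Linear(n_features, n_classes)     # task 1: space-time features
        self.head_tf = nn.Linear(n_features, n_classes)     # task 2: time-frequency features
        self.head_fused = nn.Linear(n_features, n_classes)  # task 3: fused features

    def forward(self, x, s):
        f_st, f_tf = self.st(x), self.tf(s)
        f = self.fuse(torch.cat([f_st, f_tf], dim=1))
        return self.head_st(f_st), self.head_tf(f_tf), self.head_fused(f)

# Joint optimization: the three task losses are summed (equal weights assumed)
# so that gradients from all tasks update the shared representations together.
model = MultitaskMIEEGNet()
criterion = nn.CrossEntropyLoss()
x = torch.randn(8, 1, 22, 250)   # raw EEG: batch of 8 trials, 22 channels, 250 samples
s = torch.randn(8, 1, 32, 50)    # spectrograms of the same 8 trials
y = torch.randint(0, 4, (8,))    # MI class labels
logits = model(x, s)
loss = sum(criterion(l, y) for l in logits)
loss.backward()
```

At inference time, only the fused head would typically be used for prediction; the two auxiliary heads exist to guide the shared branches during training, which is the usual rationale for this kind of multitask setup.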
