Abstract

Brain–computer interfaces (BCIs) based on motor imagery (MI) can control external applications by decoding brain physiological signals such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Traditional unimodal MI decoding methods cannot achieve satisfactory classification performance because of the limited representational ability of EEG or fNIRS signals alone. Different brain signals are typically complementary, exhibiting different sensitivities to different MI patterns. To improve the recognition rate and generalization ability of MI decoding, we propose a novel end-to-end multimodal multitask neural network (M2NN) model that fuses EEG and fNIRS signals. M2NN integrates a spatial–temporal feature extraction module, a multimodal feature fusion module, and a multitask learning (MTL) module. Specifically, the MTL module comprises two learning tasks: a main MI classification task and an auxiliary deep metric learning task. The approach was evaluated on a public multimodal dataset, and experimental results show that M2NN improved classification accuracy by 8.92%, 6.97%, and 8.62% over the multitask unimodal EEG model (MEEG), the multitask unimodal HbR model (MHbR), and the multimodal single-task model (MDNN), respectively. The multitask methods MEEG, MHbR, and M2NN improved classification accuracy by 4.8%, 4.37%, and 8.62% over their single-task counterparts EEG, HbR, and MDNN, respectively. M2NN achieved the best classification performance of the six methods, with an average accuracy across 29 subjects of 82.11% ± 7.25%. These results verify the effectiveness of multimodal fusion and MTL and show that M2NN outperforms baseline and state-of-the-art (SOTA) methods.
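
The abstract describes M2NN only at the block-diagram level. The following is a minimal PyTorch sketch of that structure, assuming a simple convolutional spatial–temporal encoder per modality, concatenation-based feature fusion, placeholder channel and sample counts, and a pairwise contrastive loss standing in for the unspecified deep metric learning objective; none of these concrete choices come from the paper.

```python
# Hypothetical M2NN-style sketch (PyTorch). Layer shapes, the fusion scheme, and
# the auxiliary metric objective are assumptions; the abstract does not specify them.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatioTemporalEncoder(nn.Module):
    """Assumed spatial-temporal extractor: a temporal convolution followed by a
    spatial (across-sensor) convolution, a pattern common in EEG CNNs."""
    def __init__(self, n_channels: int, out_dim: int = 64):
        super().__init__()
        self.temporal = nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12))
        self.spatial = nn.Conv2d(16, 32, kernel_size=(n_channels, 1))
        self.pool = nn.AdaptiveAvgPool2d((1, 8))  # fixed-size summary over time
        self.proj = nn.Linear(32 * 8, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sensors, time) -> (batch, out_dim)
        x = x.unsqueeze(1)              # add a singleton "image" channel
        x = F.elu(self.temporal(x))
        x = F.elu(self.spatial(x))      # collapses the sensor axis
        x = self.pool(x).flatten(1)
        return self.proj(x)


class M2NN(nn.Module):
    """Per-modality encoders, feature-level fusion by concatenation (an
    assumption), a main MI classification head, and a shared embedding that
    the auxiliary metric-learning task operates on."""
    def __init__(self, eeg_channels: int, fnirs_channels: int, n_classes: int = 2):
        super().__init__()
        self.eeg_enc = SpatioTemporalEncoder(eeg_channels)
        self.fnirs_enc = SpatioTemporalEncoder(fnirs_channels)
        self.fusion = nn.Sequential(nn.Linear(128, 64), nn.ELU())
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, eeg, fnirs):
        fused = self.fusion(
            torch.cat([self.eeg_enc(eeg), self.fnirs_enc(fnirs)], dim=1))
        return self.classifier(fused), fused  # (main-task logits, shared embedding)


def contrastive_metric_loss(emb, labels, margin: float = 1.0):
    """Illustrative auxiliary deep-metric term: pull same-class embeddings
    together, push different-class embeddings at least `margin` apart."""
    diff = emb.unsqueeze(1) - emb.unsqueeze(0)        # (B, B, D)
    d = diff.pow(2).sum(-1).clamp_min(1e-12).sqrt()   # pairwise distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1))
    eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)
    pos, neg = d[same & ~eye], d[~same]
    loss = emb.new_zeros(())
    if pos.numel():
        loss = loss + pos.pow(2).mean()
    if neg.numel():
        loss = loss + F.relu(margin - neg).pow(2).mean()
    return loss


# Usage with placeholder channel/sample counts (not taken from the paper).
model = M2NN(eeg_channels=30, fnirs_channels=36, n_classes=2)
eeg = torch.randn(8, 30, 400)       # (batch, EEG channels, time samples)
fnirs = torch.randn(8, 36, 400)     # HbR time series, same layout
labels = torch.randint(0, 2, (8,))
logits, emb = model(eeg, fnirs)
loss = F.cross_entropy(logits, labels) + 0.5 * contrastive_metric_loss(emb, labels)
loss.backward()
```

The joint objective here weights the auxiliary metric term by 0.5 against the main cross-entropy loss, which is an arbitrary choice for illustration; in practice this MTL trade-off would be tuned per dataset, and the paper's actual weighting is not stated in the abstract.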
