Abstract

Motor imagery (MI) tasks involving different body parts have been successfully decoded by conventional classifiers such as LDA and SVM. Decoding MI tasks within the same limb, however, remains challenging for these classifiers, even though it would provide more options for controlling robotic devices. This work proposes to improve the decoding of hand MI tasks within the same limb in a brain-computer interface (BCI) using convolutional neural networks (CNNs); the EEGNet CNN, LDA, and SVM classifiers were evaluated for two (flexion/extension) and three (flexion/extension/grasping) MI tasks. To the best of our knowledge, our approach is the first attempt to apply CNNs to this problem. In addition, visual and electrotactile stimulation were included as BCI training reinforcement after the MI task, similar to feedback sessions, and the two modalities were then compared. EEGNet achieved maximum mean accuracies of 78.46% (±12.50%) and 76.72% (±11.67%) for two and three classes, respectively, outperforming the conventional classifiers (around 60% and 48%) and similar works (below 67% and 75%, respectively). Moreover, electrotactile stimulation showed no significant advantage over visual stimulation during the calibration session. The deep-learning scheme enhanced the decoding of MI tasks within the same limb compared with the conventional framework.
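The conventional baseline described above (LDA and SVM on extracted EEG features) can be sketched as follows. This is a minimal illustration with synthetic, randomly generated "band-power" features standing in for real MI-EEG data; it is not the paper's pipeline or dataset, and the class separation, trial counts, and feature dimension are arbitrary assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic feature matrix: two MI classes (e.g. flexion vs. extension),
# 120 trials each, 16 features per trial. Placeholder values only --
# real BCI pipelines would extract these from EEG epochs.
n_trials, n_features = 120, 16
X = np.vstack([
    rng.normal(0.0, 1.0, (n_trials, n_features)),  # class 0 trials
    rng.normal(0.4, 1.0, (n_trials, n_features)),  # class 1 trials
])
y = np.array([0] * n_trials + [1] * n_trials)

# 5-fold cross-validated accuracy for each conventional classifier.
results = {}
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {results[name]:.2f}")
```

EEGNet, by contrast, is a compact CNN that learns temporal and spatial filters directly from the raw EEG epochs, which is what the abstract credits for the improvement over these feature-based classifiers.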
