Abstract

A brain-computer interface (BCI) offers a practical pathway to interpret users’ intentions by decoding motor execution (ME) or motor imagery (MI) from electroencephalogram (EEG) signals. However, developing a BCI system driven by ME or MI is challenging, particularly when the task involves continual and compound muscle movements. This study analyzes three grasping actions decoded from EEG under both ME and MI paradigms and investigates classification performance in offline and pseudo-online experiments. We propose a novel approach that feeds muscle activity pattern (MAP) images to a convolutional neural network (CNN) to improve classification accuracy. We record EEG and electromyogram (EMG) signals simultaneously and construct the MAP images by decoding both signals to estimate the specific hand grasp. As a result, we obtained an average four-class classification accuracy of 63.6 (±6.7)% in ME and 45.8 (±4.4)% in MI across all fifteen subjects. In the pseudo-online experiments, we obtained classification accuracies of 60.5 (±8.4)% in ME and 42.7 (±6.8)% in MI. The proposed method, MAP-CNN, shows stable classification performance even in the pseudo-online setting. We expect that MAP-CNN could be used in various BCI applications in the future.
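To make the MAP-CNN idea concrete, the sketch below shows a minimal CNN classifier over MAP-image-like inputs. It assumes the MAP image is a single-channel 2-D array (e.g., EMG-channel × time-bin activity estimated from the EEG/EMG decoding step) and that there are four output classes (three grasps plus rest). The input size (8 × 64), layer widths, and overall architecture are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a CNN over MAP images; architecture details are assumed.
import torch
import torch.nn as nn


class MapCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # local spatio-temporal filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((2, 2)),  # fixed-size summary regardless of MAP image size
        )
        self.classifier = nn.Linear(32 * 2 * 2, n_classes)  # 3 grasps + rest

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, height, width) MAP image
        return self.classifier(self.features(x).flatten(1))


# Example forward pass on a batch of hypothetical 8 x 64 MAP images.
model = MapCNN()
logits = model(torch.randn(4, 1, 8, 64))
print(logits.shape)  # torch.Size([4, 4])
```

In a pseudo-online setting such a model would be applied to MAP images computed over a sliding window of the incoming signal, producing one class prediction per window.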
