Abstract

Over the past two decades, significant progress has been made in brain-computer interfaces (BCIs), devices that enable direct communication between the human brain and external devices. One of the prevalent control paradigms is the motor imagery-based BCI (MI-BCI), in which users imagine specific actions to express their intentions. Left-hand and right-hand motor imagery tasks are frequently used in MI-BCIs, and if a third class is needed, the imagination of both feet is usually added. However, MI-BCI systems rarely separate the feet into the left and right lower limbs. In addition, previous studies have demonstrated that real movements can be distinguished from one another by processing the electroencephalogram (EEG); similarly, motor imagery (MI) and movement observation (MO) can also be distinguished from each other. However, the classification of left and right lower limb actions across MI, real movement (RM), and MO has not been thoroughly explored. To address these gaps, we conducted a comprehensive experiment to collect EEG under six actions (i.e., Left-MI, Right-MI, Left-RM, Right-RM, Left-MO, and Right-MO) and used three models (a convolutional neural network [CNN], a support vector machine [SVM], and a K-nearest neighbours [KNN] classifier) to classify these actions. Our CNN achieved the highest accuracy (37.77%) in the six-action classification. Although the SVM (37.21%) and KNN (25.26%) performed worse, both still exceeded the chance level (16.67%). Our results suggest that it is possible to distinguish between these six lower limb actions. This study has implications for developing multi-class BCI systems and for advancing research on multiple-action classification.
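
For readers who want a concrete starting point, the sketch below shows six-class classification of EEG-like feature vectors with scikit-learn's SVM and KNN, compared against the 16.67% chance level cited above. It is not the authors' implementation: the data are synthetic stand-ins for real EEG features, and all parameters (feature count, kernel, number of neighbours) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): six-class classification of
# synthetic EEG-like feature vectors with SVM and KNN, compared to the
# 1/6 chance level. All sizes and hyperparameters are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_features, n_classes = 600, 64, 6  # e.g. 64 channel-band features

# Synthetic stand-in for per-trial EEG features (e.g. band power per channel).
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, n_classes, size=n_trials)  # Left/Right x MI/RM/MO labels
# Inject a weak class-dependent signal so accuracy can exceed chance.
X += 0.5 * np.eye(n_classes)[y] @ rng.normal(size=(n_classes, n_features))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))),
    ("KNN", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
]:
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {acc:.2%} (chance = {1 / n_classes:.2%})")
```

With real recordings, the synthetic X would be replaced by per-trial features extracted from the EEG epochs, and a CNN (as in the study) would typically operate on the raw epochs rather than on flat feature vectors.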
