Abstract

The ability to control external devices through thought is increasingly becoming a reality. Human beings can use the electrical signals of their brain to interact with or modify their surrounding environment. The development of this technology, called brain-computer interface (BCI), will increasingly allow people with motor disabilities to communicate and to use assistive devices to walk, manipulate objects, and interact with others. Using data from the PhysioNet database, this study implemented a pattern classification system for use in a BCI, applied to 109 healthy volunteers performing real movements and motor imagery recorded by a 64-channel electroencephalography (EEG) system. Different classifiers, such as Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Decision Trees (TREE), were applied to different combinations of EEG channels. Starting from two channel pairs (C3/C4 and CP3/CP4) positioned over the contralateral and ipsilateral sensorimotor cortex, then a Region of Interest (RoI) centred on C3/CP3 and C4/CP4, and finally a data-driven automatic channel selection, we explored which channel combination best increases classification accuracy. The results showed that the proposed automatic channel selection significantly improved the performance of each classifier: the SVM achieved 98% accuracy for the classification of real versus imagined hand movements (sensitivity = 97%, specificity = 99%, AUC = 0.99), and 91% accuracy for the classification of imagined hand versus foot movements (sensitivity = 87%, specificity = 86%, AUC = 0.93). In the proposed approach, the data-driven automatic channel selection outperforms classical a priori channel selections such as C3/C4, CP3/CP4, or RoIs centred on those channels, helping to remove the boundaries of human communication and improve the quality of life of people with disabilities.
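The pipeline described above — scoring channels in a data-driven way, keeping the most discriminative subset, and classifying with an SVM — can be sketched as follows with scikit-learn. This is a minimal illustration on synthetic data, not the authors' exact method: the feature definition (one band-power value per channel per trial), the ANOVA F-score selection rule, and the channel count `k=8` are all assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 64

# Synthetic stand-in for the EEG data: one feature per channel per trial.
X = rng.normal(size=(n_trials, n_channels))
y = rng.integers(0, 2, size=n_trials)  # 0 = real movement, 1 = imagined

# Make two (arbitrary, illustrative) channels carry class information.
X[y == 1, 10] += 1.5
X[y == 1, 12] += 1.5

# Data-driven channel selection: keep the k channels whose features best
# separate the two classes, then classify the selected subset with an SVM.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=8),
    SVC(kernel="rbf"),
)
acc = cross_val_score(clf, X, y, cv=5).mean()
```

Because the selection step is fitted inside the cross-validation pipeline, the chosen channels are learned from training folds only, avoiding the selection bias that would arise from picking channels on the full data set.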

Highlights

  • The number of people who suffer from temporary or permanent movement disabilities is growing enormously worldwide

  • Research on the Brain-Computer Interface (BCI) started in 1973 [5], when a group of scientists at the University of California proposed the BCI expression for the first time and launched the BCI challenge, which aims to control an external object using brain signals recorded by electroencephalography (EEG) [6]

  • To define a pattern classification system for use in a BCI to distinguish real from imagined movements, facilitating the interaction between people with limited motor abilities and their environment, a PhysioNet data set was used [35] that includes EEG signals acquired during four different tasks: Real Hand Movement (RHM), Imagery Hand Movement (IHM), Real Fists or Feet Movement (RFM), and Imagery Fists or Feet Movement (IFM)

Introduction

The number of people who suffer from temporary or permanent movement disabilities is growing enormously worldwide. The 1988 work on BCI robot control [9] explicitly used an artificial intelligence (AI) algorithm with a machine-learning (training) period and an examination (testing) period. The development of AI algorithms provided a learning pattern-recognition approach [10], helping to increase the quantity and quality of BCI research in general and, in particular, of research on Motor Imagery (MI). In this paradigm, visuomotor imagery has been used to replace the real execution of the movement [11], and such mental imagery can work properly for this purpose [12]
