Abstract

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns elicited by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN). Results were evaluated on our own publicly available EEG data collected from 20 subjects and on an existing dataset, the 2b EEG dataset from "BCI Competition IV". Overall, better classification performance was achieved with the deep learning models than with state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating successful real-time control of a robotic arm using our CNN-based BCI.
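The abstract names an LSTM as one of the three end-to-end decoders. The paper's actual architectures and trained weights are not reproduced here; the following is only a minimal numpy sketch of how an LSTM can map a raw multi-channel EEG trial to a binary left/right-hand probability. All sizes (3 channels, 8 hidden units, 250 Hz) and the random weights are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x, params):
    """Run a single-layer LSTM over a (T, C) EEG trial and return the
    final hidden state. T = time samples, C = EEG channels."""
    W, U, b = params["W"], params["U"], params["b"]   # gate weights/biases
    H = U.shape[1]                                    # hidden size
    h = np.zeros(H)                                   # hidden state
    c = np.zeros(H)                                   # cell state
    for x_t in x:
        z = W @ x_t + U @ h + b                       # all four gates at once, shape (4H,)
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input/forget/output gates
        c = f * c + i * np.tanh(g)                    # cell state update
        h = o * np.tanh(c)
    return h

rng = np.random.default_rng(0)
C, H, T = 3, 8, 250                       # e.g. channels C3, Cz, C4; 1 s at 250 Hz (assumed)
params = {
    "W": rng.normal(scale=0.1, size=(4 * H, C)),
    "U": rng.normal(scale=0.1, size=(4 * H, H)),
    "b": np.zeros(4 * H),
}
w_out = rng.normal(scale=0.1, size=H)     # linear read-out to a binary logit

trial = rng.normal(size=(T, C))           # stand-in for one raw EEG trial
p_left = sigmoid(w_out @ lstm_forward(trial, params))
print(p_left)                             # probability of the "left hand" class
```

In an actual system the weights would of course be learned with backpropagation through time; the sketch only shows the forward pass that turns a raw trial into a class probability without any hand-crafted features.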

Highlights

  • Non-invasive brain-computer interfaces (BCIs) are intelligent systems that enable users to communicate with external devices such as computers or neural prostheses without the involvement of peripheral nerves and muscles

  • Decision Tree (DT) performed the worst, with a mean accuracy of 67%. According to their performance with the Quadratic Linear Discrimination Analysis (QLDA) classifier, the 20 participants could be classified into three groups: (G1) participants S3 and S14 achieved a mean accuracy below 75%; (G2) participants S1, S2, S4, S5, S7, S8, S9, S10, S11, S12, S13, S15, S16, S17, S19, and S20 achieved a mean accuracy between 75% and 79%

  • It should be noted that a mean accuracy of 75%, averaged over participants, was obtained using the wavelet method when tested with QLDA


Introduction

Non-invasive brain-computer interfaces (BCIs) are intelligent systems that enable users to communicate with external devices such as computers or neural prostheses without the involvement of peripheral nerves and muscles. Motor imagery (MI)-based BCI relies on a mental process in which a person solely imagines performing a certain movement without executing it. The design of more stable classification methods poses formidable challenges. One of these challenges stems from the low signal-to-noise ratio (SNR) of electroencephalography (EEG) signals, as well as the high variability in recordings across trials and within participants, which creates a high demand for more robust and discriminative features in EEG-based BCI systems. This work focuses on the decoding of two MIs, namely left- and right-hand movements. It shows the potential of using deep learning methods to classify binary MI tasks.
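To make the hand-crafted-feature problem concrete: a classical MI pipeline typically estimates band power in the mu (8–13 Hz) and beta (13–30 Hz) rhythms over the sensorimotor cortex, since imagined hand movement suppresses the contralateral mu rhythm (event-related desynchronization). The sketch below computes such a band-power feature from a synthetic single-channel signal; the sampling rate, window length, and the "C3" label are illustrative assumptions, not the paper's recording setup.

```python
import numpy as np

def band_power(trial, fs, band):
    """Mean power of `trial` (1-D array of T samples) within `band` (Hz),
    estimated from the periodogram |rFFT|^2 / T."""
    freqs = np.fft.rfftfreq(len(trial), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(trial)) ** 2 / len(trial)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 250                                   # assumed sampling rate, Hz
t = np.arange(2 * fs) / fs                 # 2-second analysis window
rng = np.random.default_rng(1)
# Synthetic "C3" channel: a 10 Hz mu rhythm buried in white noise.
c3 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

mu = band_power(c3, fs, (8, 13))           # mu band: suppressed during contralateral MI
beta = band_power(c3, fs, (13, 30))        # beta band
print(mu > beta)                           # the injected 10 Hz rhythm dominates
```

Such features must be re-tuned per subject and session because of the non-stationarity described above, which is exactly the step the end-to-end deep learning models in this work aim to remove.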

