Abstract
This research presents a brain-computer interface (BCI) framework for brain-signal classification using deep learning (DL) and machine learning (ML) approaches on functional near-infrared spectroscopy (fNIRS) signals. fNIRS signals of motor execution for walking and rest tasks are acquired from the primary motor cortex in the brain’s left hemisphere for nine subjects. DL algorithms, including convolutional neural networks (CNNs), long short-term memory (LSTM), and bidirectional LSTM (Bi-LSTM), achieve average classification accuracies of 88.50%, 84.24%, and 85.13%, respectively. For comparison, three conventional ML algorithms, support vector machine (SVM), k-nearest neighbor (k-NN), and linear discriminant analysis (LDA), are also used for classification, yielding average classification accuracies of 73.91%, 74.24%, and 65.85%, respectively. This study demonstrates that DL approaches achieve higher fNIRS-BCI classification accuracy than conventional ML approaches. Furthermore, the control commands generated by these classifiers can be used to initiate and stop the gait cycle of a lower-limb exoskeleton for gait rehabilitation.
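The paper's exact architectures and hyperparameters are not reproduced here. As a rough illustration only, the following Keras sketch shows a 1-D CNN of the kind commonly used for binary classification of fNIRS epochs (walking vs. rest); the epoch length, channel count, layer sizes, and training settings are all assumptions, and random data stands in for real recordings.

```python
# Minimal sketch (not the authors' model): a 1-D CNN classifying
# fNIRS epochs of shape (timesteps, channels) as walking vs. rest.
import numpy as np
from tensorflow.keras import layers, models

N_TIMESTEPS = 200   # hypothetical epoch length (samples)
N_CHANNELS = 8      # hypothetical number of fNIRS channels

model = models.Sequential([
    layers.Input(shape=(N_TIMESTEPS, N_CHANNELS)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 0 = rest, 1 = walking
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Random stand-in data: X (n_epochs, timesteps, channels), y labels
X = np.random.randn(100, N_TIMESTEPS, N_CHANNELS).astype("float32")
y = np.random.randint(0, 2, size=100)
model.fit(X, y, epochs=5, batch_size=16, validation_split=0.2)
```

An LSTM or Bi-LSTM variant would swap the convolutional layers for `layers.LSTM(...)` or `layers.Bidirectional(layers.LSTM(...))` over the same input shape.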
Highlights
Researchers worldwide have been striving to create a communication channel based on signals obtained from the brain
Results for all the methods used in this study are presented, including validation of the methods
The manually extracted features from functional near-infrared spectroscopy (fNIRS) data of walking and rest states of nine subjects are fed to three conventional machine learning (ML) algorithms, support vector machine (SVM), k-nearest neighbor (k-NN), and linear discriminant analysis (LDA), and the highest accuracies obtained were 78.90%, 77.01%, and 66.70%, respectively (see the sketch below)
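The paper's exact feature set is not listed here; per-channel mean, peak, and slope are common hand-crafted fNIRS features and are used below purely as assumed stand-ins. This hypothetical scikit-learn sketch shows how such features might be fed to SVM, k-NN, and LDA for comparison.

```python
# Hedged sketch of the conventional-ML comparison: hand-crafted
# statistical features fed to SVM, k-NN, and LDA (feature choices
# and data shapes are assumptions, not the paper's specification).
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def extract_features(epoch):
    """epoch: (timesteps, channels) -> flat feature vector."""
    mean = epoch.mean(axis=0)                # per-channel signal mean
    peak = epoch.max(axis=0)                 # per-channel signal peak
    t = np.arange(epoch.shape[0])
    slope = np.polyfit(t, epoch, 1)[0]       # per-channel linear slope
    return np.concatenate([mean, peak, slope])

# Random stand-in data: 100 epochs, 200 samples, 8 channels
epochs = np.random.randn(100, 200, 8)
y = np.random.randint(0, 2, size=100)        # 0 = rest, 1 = walking
X = np.array([extract_features(e) for e in epochs])

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("LDA", LinearDiscriminantAnalysis())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.2%} mean accuracy")
```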
Summary
Researchers worldwide have been striving to create a communication channel based on signals obtained from the brain. A brain-computer interface (BCI) is a communication system that provides its users with control channels independent of the brain’s normal output channels to control external devices using brain activity [1,2]. A typical BCI operates in five stages. The first stage is brain-signal acquisition using a neuroimaging modality. The second is preprocessing of those signals, as they contain physiological noises and motion artefacts [4]. The third stage is feature extraction, in which meaningful features are selected [5]. In the fourth stage, these features are classified using suitable classifiers. The final stage is the application interface, in which the classified BCI signals are given to an external device as a control command [6].
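The five stages can be pictured as a simple processing chain. The sketch below is illustrative only, not the authors' implementation: the sampling rate, band-pass cut-offs, feature choices, and the LDA stand-in classifier are all assumptions.

```python
# Illustrative five-stage fNIRS-BCI pipeline (assumed parameters).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def acquire(n_samples=200, n_channels=8):
    """Stage 1: signal acquisition (random data stands in for fNIRS)."""
    return np.random.randn(n_samples, n_channels)

def preprocess(sig, fs=10.0, low=0.01, high=0.2):
    """Stage 2: band-pass filtering to suppress physiological noise and
    motion artefacts (cut-offs are typical fNIRS values, assumed here)."""
    b, a = butter(4, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, sig, axis=0)

def extract_features(sig):
    """Stage 3: feature extraction (per-channel mean and peak)."""
    return np.concatenate([sig.mean(axis=0), sig.max(axis=0)])

def classify(features, model):
    """Stage 4: classification with a pre-trained model."""
    return model.predict(features.reshape(1, -1))[0]

def send_command(label):
    """Stage 5: application interface -> exoskeleton control command."""
    print("START gait cycle" if label == 1 else "STOP gait cycle")

# End-to-end demo with a trivially trained LDA stand-in:
X = np.array([extract_features(preprocess(acquire())) for _ in range(40)])
y = np.random.randint(0, 2, size=40)
model = LinearDiscriminantAnalysis().fit(X, y)
send_command(classify(extract_features(preprocess(acquire())), model))
```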