Abstract

This study aims to increase the number of control dimensions of electroencephalography (EEG)-based brain-computer interface (BCI) systems by distinguishing between motor imagery (MI) tasks associated with fine parts of the same hand, such as the wrist and fingers. This, in turn, can enable individuals with transradial amputations to better control prosthetic hands and to perform various dexterous hand tasks. In particular, we present a novel three-stage framework for decoding MI tasks of the same hand. The three stages of the proposed framework are the input stage, the feature extraction stage, and the classification stage. At the input stage, we employ a quadratic time-frequency distribution (QTFD) to analyze the EEG signals in the joint time-frequency domain. The use of a QTFD allows the EEG signals to be transformed into a set of two-dimensional (2D) time-frequency images (TFIs) that describe the distribution of the energy encapsulated within the EEG signals in terms of time, frequency, and electrode position. At the feature extraction stage, we design a new convolutional neural network (CNN) architecture that can automatically analyze and extract salient features from the TFIs created at the input stage. Finally, the features obtained at the feature extraction stage are passed to the classification stage, which assigns each input TFI to one of the eleven MI tasks considered in the current study. The performance of the proposed framework is evaluated using EEG signals acquired from eighteen able-bodied subjects and four transradial amputee subjects while performing eleven MI tasks within the same hand. The average classification accuracies obtained for the able-bodied and transradial amputee subjects are 73.7% and 72.8%, respectively. Moreover, the proposed framework yields 14.5% and 11.2% improvements over the results obtained for the able-bodied and transradial amputee subjects, respectively, using conventional QTFD-based handcrafted features and a multi-class support vector machine (SVM) classifier. These results demonstrate the efficacy of the proposed framework in decoding MI tasks associated with the same hand for both able-bodied and transradial amputee subjects.
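
The highlights below indicate that the QTFD used at the input stage is the Choi-Williams distribution (CWD). As a rough illustration of how one EEG channel can be turned into a time-frequency image, the following Python sketch implements a simplified (time-smoothed) discrete approximation of the CWD. The function name, the kernel parameter sigma, and the normalization are assumptions chosen for illustration and do not reproduce the authors' exact parameterization.

```python
import numpy as np
from scipy.signal import hilbert

def choi_williams_tfi(x, sigma=1.0):
    """Simplified discrete Choi-Williams distribution of one EEG channel.

    Returns a real-valued time-frequency image (TFI) whose rows index
    frequency bins and whose columns index time samples. This is an
    illustrative approximation, not the authors' implementation.
    """
    z = hilbert(np.asarray(x, dtype=float))      # analytic signal of the EEG segment
    n = len(z)
    max_lag = n // 2
    acf = np.zeros((max_lag, n), dtype=complex)  # smoothed instantaneous autocorrelation

    for lag in range(max_lag):
        t = np.arange(lag, n - lag)
        r = np.zeros(n, dtype=complex)
        r[t] = z[t + lag] * np.conj(z[t - lag])  # z(t + tau) * conj(z(t - tau))
        if lag == 0:
            acf[lag] = r                         # CWD kernel reduces to a delta at tau = 0
        else:
            # Gaussian time-smoothing window induced by the CWD exponential kernel
            u = np.arange(n, dtype=float) - n // 2
            g = np.exp(-sigma * u ** 2 / (4.0 * lag ** 2))
            g /= g.sum()
            acf[lag] = np.convolve(r, g, mode="same")

    # Fourier transform over the lag axis yields the time-frequency image
    return np.abs(np.fft.fft(acf, axis=0))
```

For a multi-channel trial, one such image would be computed per electrode and the resulting images stacked or concatenated to form the 2D TFIs described above, so that the energy distribution is expressed over time, frequency, and electrode position.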

Highlights

  • Transradial amputations can profoundly reduce the quality of life of affected individuals [1]

  • We compare the performance of our proposed framework with that obtained using conventional handcrafted features extracted from the Choi-Williams distribution (CWD)-based time-frequency representation (TFR) of the EEG signals and classified using a multi-class support vector machine (SVM) classifier

  • In this work, we demonstrate the potential of utilizing a convolutional neural network (CNN) to decode motor imagery (MI) tasks within the same hand using the CWD-based time-frequency images (TFIs) extracted from the EEG signals; an illustrative sketch of such a network is given after this list
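
The Keras sketch below illustrates the kind of CNN that could consume such TFIs and assign each one to one of the eleven MI classes. The input size, number of layers, filter counts, dropout rate, and training settings are placeholders for illustration and are not the architecture reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_tfi_cnn(input_shape=(64, 64, 1), n_classes=11):
    """Illustrative CNN for classifying time-frequency images into 11 MI tasks.

    The shapes and hyperparameters here are assumptions; multi-electrode TFIs
    could, for example, be stacked along the last (channel) dimension.
    """
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```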

Introduction

Transradial amputations can profoundly reduce the quality of life of affected individuals [1]. Recent years have witnessed significant progress in developing dexterous upper limb robotic prostheses, such as robotic prosthetic hands [2], [3]. These prostheses have the potential to enable individuals with transradial amputations to restore a significant part of their missing limbs [4]. In this regard, brain-computer interface (BCI) systems, which analyze brain activity and translate it into control commands, have been designed and employed to increase the control dimensions of existing upper limb prostheses [5].
