Abstract

Hand gesture recognition based on surface electromyography (sEMG) plays an important role in biomedical and rehabilitation engineering. Recently, remarkable progress has been made in gesture recognition using high-density surface electromyography (HD-sEMG) recorded by sensor arrays. In contrast, robust gesture recognition using multichannel sEMG recorded by sparsely placed sensors remains a major challenge. In the context of multiview deep learning, this paper presents a hierarchical view pooling network (HVPN) framework, which improves multichannel sEMG-based gesture recognition by learning not only view-specific deep features but also view-shared deep features from hierarchically pooled multiview feature spaces. Extensive intrasubject and intersubject evaluations were conducted on the large-scale noninvasive adaptive prosthetics (NinaPro) database to comprehensively evaluate the proposed HVPN framework. Results showed that, when data were segmented with 200 ms sliding windows, the proposed HVPN framework achieved intrasubject gesture recognition accuracies of 88.4%, 85.8%, 68.2%, 72.9%, and 90.3% and intersubject gesture recognition accuracies of 84.9%, 82.0%, 65.6%, 70.2%, and 88.9% on the first five subdatabases of NinaPro, respectively, outperforming state-of-the-art methods.
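To make the 200 ms windowing concrete, the sketch below shows one way to segment a multichannel sEMG recording into fixed-length sliding windows before feature extraction. The sampling rate, window increment, and function name are assumptions for illustration; they are not specified in this summary.

```python
import numpy as np

def segment_semg(signal, fs=2000, window_ms=200, step_ms=100):
    """Cut a (num_samples, num_channels) sEMG recording into sliding windows.

    fs is the sampling rate in Hz (2000 Hz is an assumed value; the NinaPro
    subdatabases use different rates). Returns an array of shape
    (num_windows, window_samples, num_channels).
    """
    window = int(fs * window_ms / 1000)   # 200 ms -> 400 samples at 2 kHz
    step = int(fs * step_ms / 1000)       # hypothetical 100 ms window increment
    starts = range(0, signal.shape[0] - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

# Example: 10 s of 12-channel sEMG at 2 kHz
windows = segment_semg(np.random.randn(20000, 12))
print(windows.shape)  # (99, 400, 12)
```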

Highlights

  • As a noninvasive approach to establishing links between muscles and devices, the surface electromyography (sEMG)-based neural interface, known as the muscle-computer interface (MCI), has been widely studied in the past decade

  • Surface electromyography is a type of biomedical signal recorded by noninvasive electrodes placed on the skin [1]; it is the spatiotemporal superposition of the motor unit action potentials (MUAPs) generated by all active motor units (MUs) at different depths within the recording area [2]. Because sEMG recorded from a subject's forearm measures the muscular activity underlying his/her hand movements, it can be used for hand gesture recognition

  • Although the abovementioned three views of multichannel sEMG were shown to be the most discriminative views for gesture recognition in [31], constructing them still requires considerable computational time and resources, and their high dimensionality increases the number of neural network parameters, forcing a trade-off between gesture recognition accuracy and computational complexity. Thus, in this paper, we evaluated a "two-view" configuration, which selected the two most discriminative of these three views (i.e., v1 and v2, represented by images of discrete wavelet packet transform coefficients (DWPTC) and discrete wavelet transform coefficients (DWTC), respectively) and used them as the input of the proposed hierarchical view pooling network (HVPN) framework (a minimal sketch of constructing such coefficient images follows this list)
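The sketch below illustrates how per-channel DWTC and DWPTC "images" could be built from one segmented window using PyWavelets. The wavelet family, decomposition level, and the channels-by-coefficients layout are assumptions; the exact view construction follows [31] and is not reproduced here.

```python
import numpy as np
import pywt

def dwtc_image(window, wavelet="db1", level=3):
    """Per-channel DWT coefficients concatenated into a 2-D 'image'
    (channels x coefficients). Wavelet and level are assumed settings."""
    rows = []
    for ch in range(window.shape[1]):
        coeffs = pywt.wavedec(window[:, ch], wavelet, level=level)
        rows.append(np.concatenate(coeffs))
    return np.stack(rows)

def dwptc_image(window, wavelet="db1", level=3):
    """Per-channel DWPT coefficients (all leaf nodes at `level`),
    concatenated into a 2-D 'image'."""
    rows = []
    for ch in range(window.shape[1]):
        wp = pywt.WaveletPacket(window[:, ch], wavelet, maxlevel=level)
        leaves = [node.data for node in wp.get_level(level, order="freq")]
        rows.append(np.concatenate(leaves))
    return np.stack(rows)

# Two views of one 200 ms window (e.g. 400 samples x 12 channels)
window = np.random.randn(400, 12)
v1, v2 = dwptc_image(window), dwtc_image(window)
print(v1.shape, v2.shape)   # (12, 400) each for these settings
```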


Summary

Introduction

As a noninvasive approach to establishing links between muscles and devices, the surface electromyography (sEMG)-based neural interface, known as the muscle-computer interface (MCI), has been widely studied in the past decade. Over the past five years, feature learning approaches based on convolutional neural networks (CNNs) have shown promising success in HD-sEMG-based gesture recognition, achieving >90% recognition accuracy in classifying a large set of gestures [11] and almost 100% recognition accuracy in classifying a small set of gestures [14, 15], because HD-sEMG signals contain both spatial and temporal information about muscle activity [16]. Aiming to improve multichannel sEMG-based gesture recognition through better learning of view-shared deep features, in this paper we proposed a hierarchical view pooling network (HVPN) framework, in which view-shared feature spaces were hierarchically pooled from multiview low-level features for view-shared feature learning.
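The following PyTorch sketch illustrates the idea of hierarchical view pooling for the two-view configuration: each view has its own convolutional branch (view-specific features), while view-shared branches operate on feature maps pooled element-wise across the two views at more than one depth of the hierarchy. Layer sizes, the choice of element-wise max as the pooling operator, the number of pooling levels, and the class count are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HVPNSketch(nn.Module):
    """Two-view hierarchical view pooling sketch (assumed configuration)."""

    def __init__(self, num_classes=52):  # hypothetical number of gesture classes
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU())
        # view-specific branches, two stages per view
        self.v1_s1, self.v1_s2 = block(1, 32), block(32, 64)
        self.v2_s1, self.v2_s2 = block(1, 32), block(32, 64)
        # view-shared branches fed by hierarchically pooled multiview features
        self.shared_s1 = block(32, 64)
        self.shared_s2 = block(64, 64)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(4 * 64, num_classes))

    def forward(self, v1, v2):
        a1, b1 = self.v1_s1(v1), self.v2_s1(v2)     # low-level, view-specific
        a2, b2 = self.v1_s2(a1), self.v2_s2(b1)     # high-level, view-specific
        s1 = self.shared_s1(torch.max(a1, b1))      # view pooling at level 1
        s2 = self.shared_s2(torch.max(a2, b2))      # view pooling at level 2
        fused = torch.cat([a2, b2, s1, s2], dim=1)  # view-specific + view-shared
        return self.head(fused)

# Hypothetical DWPTC/DWTC images of size 12 channels x 64 coefficients
v1 = torch.randn(8, 1, 12, 64)
v2 = torch.randn(8, 1, 12, 64)
print(HVPNSketch()(v1, v2).shape)  # torch.Size([8, 52])
```

Here the element-wise max plays the role of the view pooling operator; an average or a learned pooling could be substituted without changing the overall two-branch-plus-shared structure.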


