Abstract

Hand gesture recognition based on surface electromyography (sEMG) is vital in human-computer interaction, speech detection, robot control, and rehabilitation applications. However, existing models, whether traditional machine learning (ML) methods or other state-of-the-art approaches, are limited in the number of movements they can classify. To target a large number of gesture classes, data features such as temporal information should be preserved as much as possible. In the field of sEMG-based recognition, the recurrent convolutional neural network (RCNN) is an advanced method owing to the sequential nature of sEMG signals. However, the invariance introduced by the pooling layer damages important temporal information, while in the all convolutional neural network (ACNN), the feature-mixing convolution operation can produce the same output from completely different inputs. This paper proposes a concatenate feature fusion (CFF) strategy and a novel concatenate feature fusion recurrent convolutional neural network (CFF-RCNN). In the CFF-RCNN, a max-pooling layer and a 2-stride convolutional layer are concatenated to replace the conventional single dimensionality-reduction layer: the featurewise pooling operation serves as a signal amplitude detector without using any parameters, and the feature-mixing convolution operation captures contextual information. The CFF-RCNN is evaluated on both accuracy and convergence speed using three sEMG benchmark databases, DB1, DB2, and DB4, from the NinaPro database. With more than 50 gestures, the classification accuracies of the CFF-RCNN are 88.87% on DB1, 99.51% on DB2, and 99.29% on DB4, the highest among reported accuracies of ML and other state-of-the-art methods. To reach accuracies of 86%, 99%, and 98%, the RCNN requires training times of 2353.686 s, 816.173 s, and 731.771 s, respectively, whereas the CFF-RCNN needs only 1727.415 s, 542.245 s, and 576.734 s, a reduction of 26.61%, 33.56%, and 21.19%. We conclude that the CFF-RCNN is an improved method for classifying a large number of hand gestures: the CFF strategy significantly improves model performance, yielding higher accuracy and faster convergence than the traditional RCNN.
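The CFF dimensionality-reduction block described above can be sketched in code. This is a minimal illustration, assuming a 1-D convolutional backbone; the layer names, channel counts, and kernel size are hypothetical choices for the sketch, not taken from the paper.

```python
import torch
import torch.nn as nn

class CFFBlock(nn.Module):
    """Concatenate feature fusion: fuse a parameter-free max-pooling
    branch (signal amplitude detector) with a 2-stride convolution
    branch (feature-mixing, contextual information). Both branches
    halve the temporal length, replacing a single reduction layer."""

    def __init__(self, in_channels, conv_channels):
        super().__init__()
        # Featurewise pooling: no learnable parameters.
        self.pool = nn.MaxPool1d(kernel_size=2, stride=2)
        # Feature-mixing 2-stride convolution.
        self.conv = nn.Conv1d(in_channels, conv_channels,
                              kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        # x: (batch, channels, time)
        pooled = self.pool(x)   # (batch, in_channels,   time/2)
        mixed = self.conv(x)    # (batch, conv_channels, time/2)
        # Concatenate the two branches along the channel axis.
        return torch.cat([pooled, mixed], dim=1)

block = CFFBlock(in_channels=16, conv_channels=32)
x = torch.randn(4, 16, 200)    # a batch of 16-channel sEMG windows
y = block(x)
print(y.shape)                 # torch.Size([4, 48, 100])
```

Because the two branches are concatenated rather than summed, the amplitude information preserved by the pooling branch and the contextual information from the convolution branch both remain available to the following layers.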

Highlights

  • Electromyography (EMG) measures bioelectric currents produced by motor units during muscle contraction [1]

  • This paper proposes a concatenate feature fusion recurrent convolutional neural network (CFF-RCNN) structure based on a traditional RCNN, which consists of a 4-layer convolutional neural network (CNN) and a long short-term memory (LSTM) network

  • The CFF-RCNN model reaches an accuracy of 88.87% on DB1 and improves the accuracy by at least 1.50% compared to the RCNN



Introduction

Electromyography (EMG) measures bioelectric currents produced by motor units during muscle contraction [1]. Surface EMG (sEMG) detects the sum of the motor unit action potentials (MUAPs) over the skin [2]. A traditional approach to sEMG-based recognition is machine learning, which in general is not efficient or scalable enough to handle massive datasets [11]. For simple pattern recognition (PR) based on sEMG signals [12–15], methods such as linear discriminant analysis (LDA), k-nearest neighbors (KNN), principal component analysis (PCA), and artificial neural networks (ANN) are usually chosen. Because of the stochastic nature of biological signals [16], signal preprocessing and feature extraction are necessary steps when applying these algorithms [17]. Data preprocessing, such as filtering, may result in the loss of valid information [18]. Feature extraction in machine learning is time-consuming and error-prone because it requires domain expertise, which significantly increases the chances of reduced classification accuracy [19].
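To illustrate the hand-crafted feature-extraction step that these classical pipelines depend on, the sketch below computes three widely used time-domain sEMG features (mean absolute value, root mean square, and zero-crossing count) over a sliding window. The window length, step, and zero-crossing threshold are hypothetical values chosen for the example, not parameters from the paper.

```python
import numpy as np

def time_domain_features(signal, win=200, step=100, zc_thresh=0.01):
    """Slide a window over one sEMG channel and compute classical
    time-domain features for each window:
      MAV - mean absolute value (amplitude estimate)
      RMS - root mean square (power estimate)
      ZC  - zero crossings above a noise threshold (frequency proxy)
    """
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        mav = np.mean(np.abs(w))
        rms = np.sqrt(np.mean(w ** 2))
        # Count sign changes whose amplitude jump exceeds the threshold.
        zc = np.sum((w[:-1] * w[1:] < 0) &
                    (np.abs(w[:-1] - w[1:]) > zc_thresh))
        feats.append([mav, rms, zc])
    return np.asarray(feats)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)   # stand-in for one sEMG channel
F = time_domain_features(x)
print(F.shape)                  # (9, 3): 9 windows, 3 features each
```

Each feature, window length, and threshold must be designed and tuned by hand, which is the specialization cost the introduction refers to; end-to-end networks such as the CFF-RCNN learn these representations directly from the raw signal instead.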

