Abstract

Online gesture classification can rely on unsupervised segmentation to divide the data stream into static and dynamic segments for individual classification. However, this process requires motion detection calibration and adds complexity to the classification, thus becoming an additional failure point. An alternative is the sequential (dynamic) classification of the data stream. In this study we propose the use of recurrent neural networks (RNNs) to improve the online classification of hand gestures from electromyography (EMG) signals acquired from the forearm muscles. The proposed methodology was evaluated on the UC2018 DualMyo and NinaPro DB5 data sets. The performance of a Feed-Forward Neural Network (FFNN), a Recurrent Neural Network (RNN), a Long Short-Term Memory network (LSTM) and a Gated Recurrent Unit (GRU) is compared and discussed. Additionally, an alternative performance index, the gesture detection accuracy, is proposed to evaluate model performance during online classification. It is demonstrated that the static model (FFNN) and the dynamic models (RNN, LSTM and GRU) achieve similar accuracy on both data sets, i.e., about 95% for the DualMyo and about 91% for the NinaPro DB5. Although the static and dynamic models achieved similar accuracies, the dynamic models (LSTM and GRU) have a third of the parameters, yielding smaller training and inference times.
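
To illustrate the idea of sequential (dynamic) classification without a separate segmentation step, the following is a minimal sketch (not the authors' implementation) of a GRU classifier that carries its hidden state across calls so an EMG stream can be labeled sample by sample. It assumes 8 EMG channels (as on a Myo armband) and a hypothetical set of 8 gesture classes; the FFNN baseline would instead classify each fixed window independently, without a hidden state.

    # Minimal sketch, assuming PyTorch, 8 EMG channels and 8 gesture classes (hypothetical).
    import torch
    import torch.nn as nn

    class OnlineGRUClassifier(nn.Module):
        def __init__(self, n_channels=8, hidden_size=64, n_gestures=8):
            super().__init__()
            self.gru = nn.GRU(input_size=n_channels, hidden_size=hidden_size,
                              batch_first=True)
            self.head = nn.Linear(hidden_size, n_gestures)

        def forward(self, x, h=None):
            # x: (batch, time, channels); h carries the hidden state between
            # calls, so the stream is classified sequentially, without
            # segmenting it into static and dynamic parts first.
            out, h = self.gru(x, h)
            return self.head(out), h

    model = OnlineGRUClassifier()
    h = None
    stream = torch.randn(1, 100, 8)            # stand-in for a raw EMG stream
    for t in range(stream.shape[1]):           # emulate online, per-sample updates
        logits, h = model(stream[:, t:t + 1, :], h)
        gesture = logits.argmax(dim=-1)        # gesture prediction at each time step

Swapping nn.GRU for nn.LSTM (whose state is a tuple) or nn.RNN gives the other dynamic models compared in the study.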
