Abstract

Hand gesture recognition (HGR) based on electromyography (EMG) signals has been one of the most relevant research topics in the field of human–machine interfaces in recent years. HGR systems aim to identify both the moment at which a hand gesture is performed and the gesture category. To date, most state-of-the-art HGR methods are based on supervised machine learning (ML) techniques, whereas reinforcement learning (RL) approaches for classifying EMG signals have not yet been thoroughly evaluated. Moreover, the behavior of ML- and RL-based HGR systems on large datasets for user-general models remains an open research problem. In the present work, we compare a supervised learning HGR system with a reinforcement learning one, each composed of the following stages: pre-processing, feature extraction, classification, and post-processing. Using both methods, we classified and recognized EMG signals for six different hand gestures, performing experiments with training, validation, and test sets from the public EMG-EPN-612 dataset and evaluating the results for user-general HGR models. On the test set, the best model was obtained with the supervised learning method, reaching up to 90.49% ± 9.7% classification accuracy and 86.83% ± 11.30% recognition accuracy. These results demonstrate that, for the EMG-EPN-612 dataset distribution, supervised learning methods outperform reinforcement learning methods for user-general HGR systems based on EMG signals.
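The four-stage pipeline named in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the window length, the rectification pre-processing, the two time-domain features, the amplitude-threshold "classifier" (where a trained supervised model or RL policy would actually slot in), and the majority-vote post-processing are all assumptions introduced here for illustration.

```python
# Hedged sketch of a generic EMG-based HGR pipeline with the four stages
# described in the abstract: pre-processing, feature extraction,
# classification, and post-processing. All names and parameters below are
# illustrative assumptions, not the paper's actual method.
from collections import Counter


def preprocess(window):
    """Pre-processing: full-wave rectification, a common first EMG step."""
    return [abs(s) for s in window]


def extract_features(window):
    """Feature extraction: mean absolute value (MAV) and waveform length,
    two classic time-domain EMG features."""
    n = len(window)
    mav = sum(window) / n
    wl = sum(abs(window[i] - window[i - 1]) for i in range(1, n))
    return (mav, wl)


def classify(features, threshold=0.1):
    """Classification: placeholder amplitude threshold. A trained model
    (supervised classifier or RL policy) would replace this stage."""
    mav, _ = features
    return "gesture" if mav > threshold else "noGesture"


def postprocess(labels):
    """Post-processing: majority vote over per-window labels to emit one
    final prediction for the whole signal."""
    return Counter(labels).most_common(1)[0][0]


def recognize(signal, window_size=50):
    """Run the pipeline over non-overlapping windows of the signal."""
    labels = []
    for start in range(0, len(signal) - window_size + 1, window_size):
        window = preprocess(signal[start:start + window_size])
        labels.append(classify(extract_features(window)))
    return postprocess(labels)
```

In a real system each stage would be considerably richer (filtering and segmentation in pre-processing, multi-channel feature sets, a learned classifier, and temporal smoothing in post-processing), but the staged structure is the same.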

