Abstract

Today, daily life involves many computing systems, so interacting with them in a natural way makes communication more comfortable. Human–Computer Interaction (HCI) has been developed to overcome the communication barriers between humans and computers. One form of HCI is Hand Gesture Recognition (HGR), which predicts the class and the instant of execution of a given hand movement. One possible input for these models is surface electromyography (EMG), which records the electrical activity of skeletal muscles. EMG signals contain information about the intention of movement generated by the human brain. This systematic literature review analyzes the state of the art of real-time hand gesture recognition models that use EMG data and machine learning. We selected and assessed 65 primary studies following the Kitchenham methodology. Based on a common structure of machine learning-based systems, we analyzed the structure of the proposed models and standardized concepts regarding the types of models, data acquisition, segmentation, preprocessing, feature extraction, classification, postprocessing, real-time processing, types of gestures, and evaluation metrics. Finally, we identified trends and gaps that could open new directions for future research in gesture recognition using EMG.
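To make the pipeline vocabulary concrete, the following is a minimal, illustrative Python sketch of one such recognition chain: sliding-window segmentation, classic time-domain feature extraction, and a classifier. The sampling rate, channel count, window length, feature set (MAV, waveform length, zero crossings), and SVM classifier are assumptions chosen for illustration; they are common choices in this literature, not values prescribed by the review.

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical setup: 8-channel surface EMG sampled at 200 Hz, 250 ms
    # windows with 50% overlap. Illustrative assumptions, not parameters
    # fixed by the review.
    FS, CHANNELS = 200, 8
    WIN, STEP = int(0.250 * FS), int(0.125 * FS)

    def segment(emg):
        """Sliding-window segmentation of a (samples, channels) recording."""
        return [emg[i:i + WIN] for i in range(0, len(emg) - WIN + 1, STEP)]

    def features(window):
        """Per-channel time-domain features: mean absolute value (MAV),
        waveform length (WL), and zero crossings (ZC)."""
        mav = np.abs(window).mean(axis=0)
        wl = np.abs(np.diff(window, axis=0)).sum(axis=0)
        zc = (np.diff(np.sign(window), axis=0) != 0).sum(axis=0)
        return np.concatenate([mav, wl, zc])

    # Synthetic stand-in for labelled recordings of two gesture classes.
    rng = np.random.default_rng(0)
    recordings = [(rng.normal(0.0, scale, (FS, CHANNELS)), label)
                  for label, scale in [(0, 0.1), (1, 0.5)] for _ in range(5)]

    X, y = [], []
    for rec, label in recordings:
        for w in segment(rec):
            X.append(features(w))
            y.append(label)
    clf = SVC().fit(X, y)

    # At inference time, each incoming window yields one prediction;
    # postprocessing (e.g., majority voting over consecutive windows)
    # smooths the resulting label stream.
    print(clf.predict([features(segment(recordings[0][0])[0])]))

In a real-time setting, this whole chain (from segmentation through postprocessing) must complete within a tight delay budget, often cited in this literature as a few hundred milliseconds, so that the response feels instantaneous to the user.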

Highlights

  • The increase in computing power has brought many computing devices into the daily life of human beings

  • Some selected primary studies (SPS) presented more than one Hand Gesture Recognition (HGR) model; from each, we selected the model with the best evaluation performance, yielding 65 HGR models for this review

  • The other 10 general models used EMG data only from people who participated in training, so it is not possible to conclude that these 10 models can recognize the gestures of any person

Introduction

The increase in computing power has brought many computing devices into the daily life of human beings. A broad spectrum of applications and interfaces has been developed so that humans can interact with them. This interaction is easier when it is performed in a natural way (i.e., just as humans interact with each other using voice or gestures). HGR models are human–computer systems that determine what gesture was performed and when a person performed it. These systems are used in several applications, such as intelligent prostheses [1,2,3], sign language recognition [4,5], rehabilitation devices [6,7], and device control [8].
