Abstract

Hand gesture recognition (HGR) is a fundamental mode of communication and human interaction. Beyond enhancing user interaction in human-computer interaction (HCI), HGR can also help overcome language barriers. For example, it can be used to recognize sign language, a visual language expressed through hand movements, poses, and facial expressions, and used as a primary means of communication by deaf people around the world. This research aims to develop a new method for detecting dynamic hand movements, poses, and facial expressions in sign language translation systems. A modified Long Short-Term Memory (LSTM) approach and the MediaPipe library are used to recognize dynamic hand movements. Twenty context-appropriate dynamic gestures were designed to address the challenge of identifying dynamic sign movements. Sequences of image-derived keypoint data are collected with MediaPipe Holistic, preprocessed, and trained with the modified LSTM method. The model is fitted on training and validation data and evaluated on a separate test set. Evaluation with a confusion matrix yielded an average accuracy of 99.4% across the twenty trained words after 150 epochs. Per-word experiments showed a detection accuracy of 85%, while experiments with full sentences reached 80%. This work is a significant step toward improving the accuracy and practicality of dynamic sign language recognition systems, promising better communication and accessibility for deaf people.
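The pipeline described above feeds per-frame MediaPipe Holistic keypoints (pose, face, and both hands, conventionally flattened to a 1662-value vector per frame) into an LSTM sequence classifier. As a minimal, framework-free sketch of the recurrence at the core of such a classifier, the following NumPy code runs one LSTM layer over a gesture sequence and scores it against 20 classes; the sequence length, hidden size, and random weights are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def lstm_forward(x, Wx, Wh, b):
    """Run a single-layer LSTM over a sequence.

    x  : (T, D)   sequence of per-frame feature vectors
    Wx : (D, 4H)  input-to-gates weights
    Wh : (H, 4H)  hidden-to-gates weights
    b  : (4H,)    gate biases
    Returns the final hidden state, shape (H,).
    """
    H = Wh.shape[0]
    h = np.zeros(H)
    c = np.zeros(H)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        gates = x[t] @ Wx + h @ Wh + b
        i = sigmoid(gates[:H])       # input gate
        f = sigmoid(gates[H:2*H])    # forget gate
        o = sigmoid(gates[2*H:3*H])  # output gate
        g = np.tanh(gates[3*H:])     # candidate cell update
        c = f * c + i * g            # new cell state
        h = o * np.tanh(c)           # new hidden state
    return h

# Illustrative dimensions: 30 frames per gesture, 1662 keypoint
# values per frame (MediaPipe Holistic), 64 hidden units, 20 words.
rng = np.random.default_rng(0)
T, D, H, CLASSES = 30, 1662, 64, 20
x = rng.standard_normal((T, D)) * 0.01        # stand-in keypoint sequence
Wx = rng.standard_normal((D, 4 * H)) * 0.01
Wh = rng.standard_normal((H, 4 * H)) * 0.01
b = np.zeros(4 * H)
Wout = rng.standard_normal((H, CLASSES)) * 0.01

h = lstm_forward(x, Wx, Wh, b)
logits = h @ Wout
probs = np.exp(logits - logits.max())
probs /= probs.sum()                          # softmax over the 20 words
print(probs.shape, round(float(probs.sum()), 6))
```

In a real system this forward pass would be replaced by a trained deep-learning model (e.g. stacked LSTM layers with a dense softmax head), with the keypoint sequences extracted live from video by MediaPipe Holistic.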
