Abstract

In recent years, several software and hardware solutions have been proposed for object detection, motion tracking, and gesture identification. However, these solutions fail to identify the appropriate gestures and track their motion both accurately and within an acceptable time frame. To overcome these challenges, we propose a novel sign language translator application that combines the MediaPipe Holistic model with a Long Short-Term Memory (LSTM) neural network (NN). The resulting model picks up gestures from different angles quickly and accurately and translates them into the corresponding text. The work is divided into two parts: the first covers research and data collection, and the second covers training and testing on the collected data for real-time use. The model has been trained and tested on real-time data collected from a Leap Motion controller and cameras, and it delivers appreciable results.
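
The sketch below illustrates the kind of pipeline the abstract describes: MediaPipe Holistic extracts per-frame body and hand keypoints, and a stacked LSTM classifies a fixed-length window of those frames into a gesture label. It assumes a Keras/TensorFlow implementation; the layer sizes, sequence length, feature count, and gesture vocabulary are illustrative placeholders, not values taken from the paper.

```python
# Illustrative sketch only; hyperparameters are assumptions, not the authors' values.
import numpy as np
import mediapipe as mp
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    """Flatten MediaPipe Holistic landmarks (pose + both hands) into one vector."""
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    lh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, lh, rh])  # 132 + 63 + 63 = 258 features per frame

SEQ_LEN, N_FEATURES, N_GESTURES = 30, 258, 10  # placeholder dimensions

# Stacked LSTM over a window of keypoint frames, followed by dense layers
# that map the sequence to one of the gesture classes (i.e., a text label).
model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(SEQ_LEN, N_FEATURES)),
    LSTM(128, return_sequences=False),
    Dense(64, activation='relu'),
    Dense(N_GESTURES, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
```

In a real-time setting, frames from the camera would be passed through `mp_holistic.Holistic(...).process(...)`, the resulting keypoint vectors buffered into windows of `SEQ_LEN` frames, and each window fed to `model.predict` to produce the translated text.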
