Abstract

This paper presents the development of a real-time sign language detection model using computer vision, machine learning, and deep learning. Its goal is to bridge the communication gap faced by the speech- and hearing-impaired community by using recurrent neural networks (RNNs) and long short-term memory (LSTM) models. The proposed system utilizes a dataset comprising sign language gestures captured in various contexts and by different signers. Preprocessing techniques are applied to extract relevant features from the video frames, including hand movements, facial expressions, and body postures. The LSTM architecture is chosen for its ability to capture temporal dependencies in sequential data, making it well suited to the dynamic nature of sign language. The training process involves optimizing the LSTM network on the labeled dataset, incorporating techniques such as transfer learning and data augmentation to enhance model generalization. The resulting model is capable of recognizing a diverse set of sign language gestures in real time.
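
To make the described architecture concrete, the following is a minimal sketch of a stacked-LSTM gesture classifier in Python with Keras. It assumes each training example is a fixed-length sequence of per-frame feature vectors (e.g. hand, face, and body-pose keypoints extracted during preprocessing); the sequence length, feature dimension, layer sizes, and number of gesture classes below are illustrative assumptions, not values reported in the paper.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

SEQ_LEN = 30        # frames per gesture clip (assumed)
N_FEATURES = 1662   # per-frame keypoint vector length (assumed)
N_CLASSES = 10      # number of gesture classes (assumed)

model = Sequential([
    # Stacked LSTM layers capture temporal dependencies across the frame sequence.
    LSTM(64, return_sequences=True, input_shape=(SEQ_LEN, N_FEATURES)),
    LSTM(128, return_sequences=False),
    Dropout(0.3),                            # regularization to aid generalization
    Dense(64, activation="relu"),
    Dense(N_CLASSES, activation="softmax"),  # one probability per gesture class
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training on labeled clips: X has shape (num_clips, SEQ_LEN, N_FEATURES),
# y is one-hot encoded with shape (num_clips, N_CLASSES). Dummy data shown here.
X = np.random.rand(8, SEQ_LEN, N_FEATURES).astype("float32")
y = np.eye(N_CLASSES)[np.random.randint(0, N_CLASSES, 8)]
model.fit(X, y, epochs=2, batch_size=4)

At inference time, a classifier of this kind is typically applied to a sliding window over the most recent frames, which is what enables per-frame, real-time gesture prediction.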
