Abstract
Sign language serves as a vital communication medium for the Deaf and Hard of Hearing (DHH) community, yet its recognition by computational systems remains a complex challenge. This paper presents a novel approach to sign language detection that applies action recognition principles through a Long Short-Term Memory (LSTM) deep learning model. Leveraging the temporal dynamics and sequential nature of sign language gestures, the LSTM model is trained to accurately identify and classify signs from video data. The proposed system processes video sequences to extract key features from each frame, which are then fed into the LSTM network. The model's architecture is designed to capture the temporal dependencies and nuanced movements characteristic of sign language. We use a comprehensive dataset of diverse sign language gestures to train and evaluate the model. Our experimental results demonstrate that the LSTM-based approach achieves high accuracy in sign language detection, outperforming traditional static frame-based methods. The system's performance is evaluated with precision, recall, and F1-score, showcasing its robustness in real-world scenarios.
Keywords: Gesture Recognition, Deep Learning (DL), Sign Language Recognition (SLR), TensorFlow, Matplotlib, MediaPipe, opencv-python, NumPy.
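To illustrate the pipeline the abstract describes (per-frame keypoint features fed into a recurrent classifier), the following is a minimal sketch of a stacked-LSTM model in TensorFlow/Keras. The sequence length, feature dimension, and sign vocabulary are illustrative assumptions (e.g. MediaPipe Holistic landmarks flattened to one vector per frame), not values taken from the paper.

    # Sketch: stacked-LSTM classifier over sequences of per-frame keypoint vectors.
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    SEQ_LEN = 30         # assumed number of frames sampled per gesture clip
    NUM_FEATURES = 1662  # assumed size of a flattened MediaPipe Holistic keypoint vector
    actions = ["hello", "thanks", "iloveyou"]  # hypothetical sign vocabulary

    model = Sequential([
        LSTM(64, return_sequences=True, activation="relu",
             input_shape=(SEQ_LEN, NUM_FEATURES)),
        LSTM(128, return_sequences=True, activation="relu"),
        LSTM(64, return_sequences=False, activation="relu"),
        Dense(64, activation="relu"),
        Dense(32, activation="relu"),
        Dense(len(actions), activation="softmax"),  # one probability per sign class
    ])
    model.compile(optimizer="Adam", loss="categorical_crossentropy",
                  metrics=["categorical_accuracy"])

    # Dummy batch showing the expected input shape: (clips, frames, features)
    X = np.zeros((1, SEQ_LEN, NUM_FEATURES), dtype=np.float32)
    print(model.predict(X).shape)  # -> (1, len(actions))

Because the recurrent layers consume whole frame sequences, the classifier can exploit the temporal dependencies between frames rather than scoring each static frame in isolation, which is the contrast with frame-based baselines drawn in the abstract.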