Abstract

The proposed system aims to help hearing people understand the communication of speech-impaired individuals through hand gesture recognition and the generation of animated gestures. The system focuses on recognizing different hand gestures and converting them into information that hearing people can understand. The YOLOv8 model, a state-of-the-art object detection algorithm, is employed to detect and classify sign language gestures. Sign language video generation can also serve as a guide for anyone learning sign language, by providing expressive sign language videos in which avatars translate user inputs into signed content; the CWASA package and SiGML files are used for this process. The project contributes to the advancement of assistive technologies for the hearing-impaired community, offering innovative solutions for sign language recognition and video generation.
