Abstract

The Sign Language Recognition system is built on a deep learning model designed specifically to recognize signs in sign language. Sign language is a visual, nonverbal language based on hand gestures, used by the deaf and hard-of-hearing community to communicate with one another and with the general public; access to it greatly aids the social and emotional communication of people with speech and hearing impairments. The model developed in this paper captures images through a live webcam and displays the meaning of each sign on the screen as text output. The model is built and trained with a deep learning framework using a Convolutional Neural Network (CNN). Training images of hand gestures are captured through the webcam using computer vision; after successful training, the system recognizes a given input gesture by matching its parameters against the learned model and displays the gesture's meaning as text on the screen.
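The full text of the paper is not available here, so the exact architecture is unknown. As a rough illustration of the core operation a CNN applies to such webcam frames, the sketch below implements a single convolution step (cross-correlation followed by a ReLU activation) in plain Python over a toy grayscale image; all names, the image, and the edge-detecting kernel are illustrative and not taken from the paper.

```python
def conv_relu(image, kernel):
    """One CNN-style layer step: valid-mode 2D cross-correlation + ReLU.

    `image` and `kernel` are 2D lists of numbers (rows of pixels/weights).
    Returns the resulting feature map as a 2D list.
    """
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    feature_map = []
    for i in range(ih - kh + 1):          # slide the kernel over every
        row = []                          # valid position in the image
        for j in range(iw - kw + 1):
            s = sum(image[i + m][j + n] * kernel[m][n]
                    for m in range(kh) for n in range(kw))
            row.append(max(s, 0.0))       # ReLU: keep only positive responses
        feature_map.append(row)
    return feature_map

# Toy 5x5 "frame" with a vertical edge, and a 3x3 vertical-edge kernel.
img = [[0, 0, 1, 1, 1]] * 5
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
fmap = conv_relu(img, kernel)
print(fmap)  # each row is [3, 3, 0.0]: strong response at the edge
```

In a real recognizer, many such learned kernels are stacked into convolutional layers, followed by pooling and fully connected layers that map the feature maps to sign-class labels.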

Full Text: paper version not available.

