Abstract

Sign language is the most common means of communication for speech-impaired people. However, most hearing people never learn sign language, which isolates deaf and mute (D&M) individuals; sign language is an old, naturally evolved form of communication, yet because most people are unfamiliar with any systematic sign language and a human interpreter cannot always be present, a mediator system that translates sign language is needed. To that end, this paper presents a real-time method that uses deep learning for ASL translation. The main aim of the project is to design a system that identifies the letters of the American Sign Language alphabet as they are signed: a camera captures frames of the signing hand, each frame is passed through a filter, and the filtered image is fed to a classifier that predicts the class of the hand gesture. The proposed system is an initial step toward a sign-language translator that makes communication easier. The result is an HCI system that enables people to communicate with D&M individuals without knowing sign language.
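To make the capture-filter-classify pipeline concrete, the following is a minimal sketch of such a loop in Python with OpenCV and Keras. It is not the paper's implementation: the model file asl_cnn.h5, the 64x64 input size, the A-Z label order, and the choice of blur-plus-adaptive-threshold as the "filter" are all illustrative assumptions.

```python
import string

import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = list(string.ascii_uppercase)  # assumed A-Z class order

# Hypothetical pretrained classifier; the paper's architecture is not specified here.
model = load_model("asl_cnn.h5")

def preprocess(frame):
    """Grayscale + blur + adaptive threshold as a stand-in for the paper's filter."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    binary = cv2.adaptiveThreshold(
        blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2
    )
    resized = cv2.resize(binary, (64, 64))  # assumed model input size
    return resized.astype("float32")[None, :, :, None] / 255.0

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Classify the filtered frame and overlay the predicted letter.
    probs = model.predict(preprocess(frame), verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("ASL", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Running per-frame prediction on the raw webcam stream, as above, is the simplest real-time setup; a production system would typically add hand detection and temporal smoothing before classification.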
