Abstract

Sign language is a visual communication system of gestures and signs used primarily by people who cannot hear or speak. Although it is one of the oldest and most natural forms of language, most people are not familiar with it, which makes communication difficult. To address this problem, we have built a real-time method for detecting sign language from hand gestures using a convolutional neural network (CNN). In our model, the hand image is first passed through a filter, and the resulting features are then fed to a classifier that predicts the class of the hand gesture. The model achieves 98% accuracy on the alphabet letters A–Z. On the other hand, the most common hurdle deaf and mute people face is communicating with hearing people, since not every hearing person knows sign language. Another main feature of the project is therefore a communication system for deaf people: it translates an audio message into the corresponding sign language. This part of the system takes an audio message as input, converts the recorded speech into the corresponding sign images and videos, and displays the relevant American Sign Language images or GIFs that we have already defined. With the aid of this part of the system, communication between hearing and deaf people becomes feasible. Overall, the main idea of the project is a system that helps deaf and mute people interact with hearing people.
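The filter-then-classifier pipeline described above can be illustrated with a minimal sketch. This is not the paper's trained CNN; it is a toy stand-in, assuming a single hand-crafted edge-detection kernel in place of learned convolutional filters and a nearest-centroid classifier in place of the network's dense layers, purely to show the data flow (image → filtered feature map → predicted class).

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) over a grayscale image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical edge-detection kernel standing in for a learned CNN filter.
EDGE_KERNEL = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)

def classify(feature_map, centroids):
    """Toy nearest-centroid classifier over the flattened feature map."""
    flat = feature_map.ravel()
    distances = {label: np.linalg.norm(flat - c) for label, c in centroids.items()}
    return min(distances, key=distances.get)

# A flat (all-ones) 5x5 "hand image" produces an all-zero edge map,
# which the classifier matches to the zero centroid.
image = np.ones((5, 5))
features = conv2d(image, EDGE_KERNEL)          # shape (3, 3)
centroids = {"A": np.zeros(9), "B": np.ones(9)}
prediction = classify(features, centroids)      # "A"
```

In the actual system the kernels are learned during training and there are many filters per layer; the sketch only mirrors the two-stage structure the abstract describes.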
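The audio-to-sign direction can likewise be sketched as a lookup step after speech recognition. The file names and the fingerspelling fallback below are assumptions for illustration; the real system maps recognized phrases to its own pre-defined ASL images and GIFs.

```python
# Hypothetical mapping from recognized phrases to pre-rendered ASL media files.
SIGN_MEDIA = {
    "hello": "gifs/hello.gif",
    "thank you": "gifs/thank_you.gif",
}

def transcript_to_signs(transcript):
    """Map a speech-recognition transcript to a list of sign media paths.

    Known phrases resolve to a pre-defined GIF; unknown words fall back to
    per-letter fingerspelling images (A-Z), matching the abstract's use of
    the alphabet model for out-of-vocabulary input.
    """
    text = transcript.lower().strip()
    if text in SIGN_MEDIA:
        return [SIGN_MEDIA[text]]
    media = []
    for word in text.split():
        if word in SIGN_MEDIA:
            media.append(SIGN_MEDIA[word])
        else:
            media.extend(f"letters/{ch.upper()}.png" for ch in word if ch.isalpha())
    return media
```

Usage: `transcript_to_signs("hello")` returns the single GIF path, while `transcript_to_signs("hi")` falls back to the fingerspelling images for H and I. In the full system the transcript would come from a speech-recognition front end rather than being passed in directly.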
