Abstract

Some people have a disability that prevents them from speaking. Such individuals can communicate in numerous ways, sign language being one of the most widely used: each word is represented by a specific sequence of body and hand gestures. The goal of this paper is to translate human sign language into speech by interpreting these gestures. We first construct a data-set of hand gestures, store it in a database, and then train and test a deep convolutional neural network on these gesture images. When a user launches the application, it detects the gestures stored in the database and displays the corresponding results. This system can assist people who are hard of hearing while simultaneously making communication with them simpler for everyone else.
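The recognition pipeline described above rests on the basic building blocks of a convolutional network: convolution, a non-linearity, and pooling. As a hedged illustration only (not the paper's actual model or data), a single convolution + ReLU + max-pool forward pass over a toy grayscale "gesture" image can be sketched in plain NumPy; the image, shapes, and the 3x3 vertical-edge kernel below are all illustrative assumptions, since a trained CNN would learn its kernel weights from the gesture data-set.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a grayscale image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling; truncates edges that do not fit."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def relu(x):
    return np.maximum(x, 0)

# Toy 8x8 "gesture" image: a bright vertical bar on a dark background.
image = np.zeros((8, 8))
image[:, 3:5] = 1.0

# Illustrative 3x3 vertical-edge kernel (assumed weights, not learned ones).
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)  # (3, 3)
```

In a real system this feature map would be fed through further convolutional layers and a final classifier that maps each gesture image to a word; frameworks such as TensorFlow or PyTorch implement these layers efficiently.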
