Abstract

The main aim of this project is to create a user-friendly interface between hearing people and the deaf and hard of hearing. Humans communicate with one another through natural language channels such as speech and writing, or through body language (gestures) such as hand motions, head gestures, facial expressions, lip motion, and so forth. Comprehending sign language is just as vital as understanding natural language. Sign language is the predominant mode of communication among the deaf and hearing-impaired all over the world; it consists of distinct gestures, each with a specific meaning. Hearing-impaired people use sign language as their primary mode of communication, and without a translator they have difficulty conversing with hearing people. As a result, implementing a system that recognises sign language would have a substantial positive impact on the social lives of deaf people. In this research we propose a modified LSTM model for continuous sequences of gestures, also known as continuous sign language detection (SLD), which detects a series of related motions. It is based on the division of a continuous sign sentence into sub-units and the modelling of these sub-units using neural networks; as a result, different combinations of sub-units need not be considered during training. The proposed approach has been tested on sentences signed in Indian Sign Language (ISL), which are recognised in terms of their constituent sign words.
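The abstract does not include code, but the described approach (an LSTM classifying sub-unit gesture sequences) can be illustrated with a minimal Keras sketch. The sequence length, feature size, layer widths, and number of sub-unit classes below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of an LSTM gesture classifier, assuming each sign
# sub-unit arrives as a fixed-length sequence of per-frame keypoint
# features. SEQ_LEN, N_FEATURES, and N_CLASSES are hypothetical.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN, N_FEATURES, N_CLASSES = 30, 126, 20  # assumed values

model = keras.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.LSTM(64, return_sequences=True),   # per-frame temporal features
    layers.LSTM(128),                          # summary of the whole sub-unit
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),  # one score per sub-unit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on dummy data so the sketch runs end to end.
x = np.random.rand(8, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=(8,))
model.fit(x, y, epochs=1, verbose=0)
```

In a continuous-SLD pipeline of this kind, predictions for successive sub-units would then be stitched together to recover the full signed sentence.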
