Abstract

Although sign language is the most natural means of communication for deaf and mute (D&M) people, they find it challenging to socialize. Because the structure of sign language differs from that of written text, a language barrier arises between D&M individuals and the hearing population, so D&M people rely on vision-based communication instead. If a standard interface existed that translated sign language into readable text, their gestures could be easily understood by others. Accordingly, this work develops a vision-based interface system that allows D&M persons to communicate without either party needing to know the other's language. In this project, we first gathered data and created a dataset, then extracted useful features from the images. After verification, we trained the model using the Long Short-Term Memory (LSTM) algorithm with TensorFlow and Keras, classifying the gestures by alphabet letter. Using our own dataset, the system achieved an accuracy of around 86.75% in an experimental test. The system uses the LSTM algorithm to process the image-derived data.
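As an illustration of the training step described above, the following is a minimal sketch of an LSTM alphabet-gesture classifier built with TensorFlow/Keras. The input shape, layer sizes, and per-frame feature count (e.g., hand-landmark coordinates) are assumptions for demonstration only; the abstract does not specify the exact architecture used.

```python
# Minimal sketch of an LSTM gesture classifier in Keras.
# NUM_FRAMES, NUM_FEATURES, and the layer widths are illustrative
# assumptions, not the paper's reported configuration.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

NUM_FRAMES = 30     # frames per gesture sequence (assumed)
NUM_FEATURES = 63   # e.g., 21 hand landmarks x 3 coordinates (assumed)
NUM_CLASSES = 26    # one class per alphabet letter

model = Sequential([
    Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    LSTM(64, return_sequences=True),   # keep per-frame outputs for stacking
    LSTM(128),                         # summarize the sequence
    Dense(64, activation="relu"),
    Dense(NUM_CLASSES, activation="softmax"),  # probability per letter
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])

# Dummy data standing in for the extracted image features:
# X has shape (samples, NUM_FRAMES, NUM_FEATURES), y is one-hot labels.
X = np.random.rand(100, NUM_FRAMES, NUM_FEATURES).astype("float32")
y = np.eye(NUM_CLASSES)[np.random.randint(0, NUM_CLASSES, size=100)]
model.fit(X, y, epochs=5, batch_size=16)
```

Stacking two LSTM layers, as sketched here, is a common choice for gesture sequences because the first layer models frame-to-frame motion while the second condenses the whole sequence into a fixed-length representation for classification.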
