Abstract
Although sign language is the most natural mode of communication for deaf and mute (D&M) people, they often find it challenging to socialize. Because the structure of sign language is distinct from written text, a language barrier arises between D&M individuals and the rest of the population, so they rely on vision-based communication. If a standard interface could translate sign language into visible text, their gestures would be easily understood by others. Research and development has therefore been carried out on vision-based interface systems that allow D&M persons to communicate without both parties knowing each other's language. In this project, we first gathered images and built a dataset, then extracted useful features from the images. After verification, we trained a model using the Long Short-Term Memory (LSTM) algorithm with TensorFlow and Keras, and classified the gestures by alphabet letter. In an experimental test on our own dataset, the system achieved an accuracy of around 86.75%.
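The abstract names an LSTM classifier built with TensorFlow and Keras that maps gesture sequences to alphabet classes. A minimal sketch of such a model is shown below; the layer sizes, sequence length (frames per gesture), and feature dimension (e.g. hand-landmark coordinates) are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 26   # one class per alphabet letter (assumption)
SEQ_LEN = 30       # frames per gesture sequence (assumption)
NUM_FEATURES = 63  # e.g. 21 hand landmarks x 3 coordinates (assumption)

def build_model() -> keras.Model:
    """Sketch of an LSTM gesture classifier in Keras."""
    model = keras.Sequential([
        layers.Input(shape=(SEQ_LEN, NUM_FEATURES)),
        # Stacked LSTMs read the per-frame feature vectors in order
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(128),
        layers.Dense(64, activation="relu"),
        # Softmax over the alphabet classes
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
# Dummy batch of 4 random sequences, just to verify the shapes
x = np.random.rand(4, SEQ_LEN, NUM_FEATURES).astype("float32")
probs = model.predict(x, verbose=0)  # shape (4, NUM_CLASSES)
```

In practice the model would be trained with `model.fit` on the extracted feature sequences before evaluating accuracy on a held-out split.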
Published in: International Journal on Recent and Innovation Trends in Computing and Communication