Abstract

This system provides a simple solution to a complex problem faced by the deaf and mute community. This paper describes the development of a web application built around a real-time image recognition system whose main aim is to translate the hand signs used in sign language into textual/visual and audio output using artificial neural networks. The system uses computer vision to capture signs as input and makes accurate predictions in real time with deep learning methods. By using an artificial neural network (ANN), the system achieves strong performance in recognizing hand signs. Because it provides both audio and textual/visual output, the system makes communication between people who use sign language and people who do not know it effective in both directions. The project aims to serve society by helping people who use sign language to express themselves more effectively, giving them the ability to "speak" through technology.
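As an illustration of the pipeline the abstract describes (camera input, ANN classification, text and audio output), the following is a minimal Python sketch, not the authors' implementation. The model file name "sign_model.h5", its 64x64 grayscale input shape, and the label list are assumptions introduced for the example.

```python
# Minimal sketch of the described pipeline: webcam -> ANN classifier -> text + audio.
# "sign_model.h5", the 64x64 grayscale input, and LABELS are hypothetical.
import cv2
import numpy as np
import pyttsx3
import tensorflow as tf

LABELS = ["hello", "thank_you", "yes", "no"]          # hypothetical sign classes
model = tf.keras.models.load_model("sign_model.h5")   # assumed pre-trained classifier
engine = pyttsx3.init()                               # text-to-speech for audio output

cap = cv2.VideoCapture(0)                             # webcam as the computer-vision input
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the frame to the (assumed) input size of the network.
    roi = cv2.resize(frame, (64, 64))
    roi = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY) / 255.0
    probs = model.predict(roi[np.newaxis, ..., np.newaxis], verbose=0)[0]
    label = LABELS[int(np.argmax(probs))]
    # Textual/visual output overlaid on the video feed ...
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Sign recognition", frame)
    # ... and audio output for the recognized sign.
    engine.say(label)
    engine.runAndWait()
    if cv2.waitKey(1) & 0xFF == ord("q"):             # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```

A production system would typically add hand detection/segmentation before classification and debounce the audio output so a sign is only spoken when the prediction changes, but those details are beyond the scope of this sketch.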
