Abstract

Considering the communication barriers faced by people with speech and hearing impairments, we present a tool that bridges the communication gap and facilitates better interaction. When a hearing person is untrained in sign language, a sign language interpreter becomes unavoidable in order to establish a meaningful exchange. We propose a system that enables two-way conversation between deaf people and hearing people. In this paper, we present an efficient prototype in two stages. In the first stage, sign language gestures are fed into the system in real time using the device's computer vision capabilities. These gestures are then recognized by our Deep Neural Network, while fine-tuned hand detection with an edge detection algorithm interprets them as both text and audio. The second stage converts the audio to text and optionally displays the relevant hand gestures for it. The system can recognize more than 300 sign words in Indian Sign Language.

Keywords: Convolutional Neural Networks, Natural Language Processing, Machine Learning, Sign Language Converter, Computer Vision
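The abstract's first stage relies on hand detection combined with an edge detection algorithm before gestures reach the neural network. The paper's abstract does not specify which edge detector is used, so the sketch below illustrates the general idea with a standard Sobel-style gradient operator applied to a grayscale frame; the function name `sobel_edges` and the synthetic test image are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sobel_edges(gray: np.ndarray) -> np.ndarray:
    """Return the gradient-magnitude edge map of a 2-D grayscale image.

    Illustrative stand-in for the edge detection step described in the
    abstract; a real pipeline would run this (or cv2.Canny) on each
    camera frame before hand-region classification.
    """
    # Horizontal and vertical Sobel kernels.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    padded = np.pad(gray.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Correlate each 3x3 kernel with the padded image.
    for i in range(3):
        for j in range(3):
            patch = padded[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# Synthetic frame: a vertical intensity step, as at a hand's silhouette.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = sobel_edges(img)  # strong response only along the step boundary
```

In the full system this edge map would be one preprocessing input; the recognized hand region is then passed to the Deep Neural Network for classification into one of the sign-word classes.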
