Abstract

Speech impairment is a condition that limits a person's capacity for verbal communication, and those affected frequently rely on sign language and other alternative forms of communication. While sign language has gained popularity, bridging the communication gap between those who sign and those who do not remains a challenge. Our project addresses this issue with an application that performs real-time sign-language-to-text translation, enabling seamless communication between the two groups. The system is built on American Sign Language (ASL) and detects gestures using a Convolutional Neural Network (CNN) based on the Inception V3 architecture. The core objective is to apply machine learning techniques to convert ASL hand gestures into text and vice versa. Beyond translating individual signs, the system interprets ASL in real time from single-hand gestures, recognizes ASL words, converts them into text, and renders that text as audible speech.
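The abstract names Inception V3 as the CNN backbone for gesture recognition. A minimal transfer-learning sketch of such a classifier, assuming TensorFlow/Keras and a hypothetical 29-class ASL alphabet set (26 letters plus "space", "delete", and "nothing" — a common layout in ASL alphabet datasets, not confirmed by the abstract), might look like:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

def build_asl_classifier(num_classes: int = 29) -> Model:
    """Inception V3 backbone with a small classification head.

    num_classes=29 is an assumption (26 ASL letters plus space,
    delete, and 'nothing'); adjust to the actual gesture set.
    """
    # weights=None keeps this sketch offline; in practice one would
    # pass weights="imagenet" to get pretrained features.
    backbone = InceptionV3(include_top=False, weights=None,
                           input_shape=(299, 299, 3))
    backbone.trainable = False  # freeze the backbone for transfer learning

    # Small classification head on top of the pooled backbone features.
    x = layers.GlobalAveragePooling2D()(backbone.output)
    x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = Model(backbone.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In use, cropped hand-region frames would be resized to 299x299 (Inception V3's native input size) and fed through the model, with the argmax of the softmax output giving the predicted gesture class.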
