Abstract

Although not a universally understood language, sign language is an essential tool for the deaf community. Communication between deaf communities and the hearing population is severely hampered, as human interpretation can be both costly and time-consuming. In this paper, we present a real-time American Sign Language (ASL) generation and recognition system that makes use of deep learning and Convolutional Neural Networks (CNNs). Our system correctly identifies and generates ASL signs despite variations in lighting, skin tone, and background. We trained our model on a large dataset of ASL signs in order to obtain a high level of accuracy. Our findings show that our system achieves accuracy rates of 98.53% in training and 98.84% in validation. Our approach uses the advantages of CNNs to accomplish fast and precise recognition of individual letters and words, making it particularly effective for fingerspelling recognition. We believe that our technology has the potential to transform communication between the hearing community and the deaf and hard-of-hearing communities by providing a dependable and cost-effective means of sign language interpretation. Our method could help people who use sign language communicate more easily and live better in a range of environments, including schools, hospitals, and public places.
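To illustrate the kind of pipeline the abstract describes, the sketch below shows a CNN-style forward pass that maps a hand image to class scores over the 26 fingerspelling letters. All shapes, filter counts, and weights here are illustrative assumptions (random and untrained), not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Valid 2-D convolution of an (H, W) image with (K, kh, kw) kernels."""
    K, kh, kw = kernels.shape
    H, W = image.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernels[k])
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# A placeholder 28x28 grayscale "hand" image (hypothetical input size).
image = rng.random((28, 28))

# One convolutional layer with 8 learned filters, then a dense layer
# projecting the flattened feature map onto 26 letter classes.
kernels = rng.standard_normal((8, 3, 3)) * 0.1
features = relu(conv2d(image, kernels)).reshape(-1)
W = rng.standard_normal((26, features.size)) * 0.01
probs = softmax(W @ features)

predicted_letter = chr(ord('A') + int(np.argmax(probs)))
```

In a trained system, `kernels` and `W` would be learned from the ASL sign dataset; a real architecture would also stack multiple convolution, pooling, and dense layers rather than the single layer of each shown here.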
