Abstract

Individuals with hearing impairments communicate primarily through sign language. Our goal was to create an American Sign Language (ASL) recognition dataset and use it to train a neural-network-based model that interprets hand gestures and positions as natural language. In this study we evaluated SVM, CNN, and ResNet-18 models for ASL sign recognition on the new dataset, which accounts for constraints such as lighting and camera distance. We also compare all of the implemented models against our proposed CNN model under invariant conditions. The proposed CNN achieves a precision of 95.10% with a minimal loss of 0.545 despite variations in test data and scene configuration, indicating strong potential for future image recognition systems built on deep learning. These advances may also benefit speech-language therapy aimed at helping people overcome challenges associated with deafness and at improving their opportunities for social integration.
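As a rough illustration of the kind of model the abstract describes (not the authors' published architecture), the sketch below shows a small convolutional classifier for fixed-size ASL hand-gesture images in PyTorch. The input size (64x64 RGB) and the class count of 29 (26 letters plus auxiliary signs such as "space") are assumptions for the example only.

```python
# Minimal sketch, assuming 64x64 RGB crops and 29 gesture classes;
# this is illustrative, not the architecture evaluated in the paper.
import torch
import torch.nn as nn

class ASLConvNet(nn.Module):
    def __init__(self, num_classes: int = 29):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage: classify a batch of normalized hand-gesture crops.
model = ASLConvNet()
logits = model(torch.randn(8, 3, 64, 64))  # shape: (batch, num_classes)
predictions = logits.argmax(dim=1)
```

A pretrained ResNet-18 (e.g. from torchvision, with its final fully connected layer replaced to output the gesture classes) could be substituted for this network when comparing against the CNN and SVM baselines mentioned in the abstract.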
