Abstract

Sign Language Recognition (SLR) identifies hand gestures and produces the corresponding text or speech. Despite advances in deep learning, SLR still faces challenges in terms of accuracy and visual quality. Sign Language Translation (SLT) aims to translate sign language images or videos into spoken language, but it is hampered by the limited availability of language comprehension datasets. This paper presents an approach for sign language recognition and conversion to text using a custom dataset containing 15 classes, each with 70-75 images. The proposed solution uses the YOLOv5 architecture, a state-of-the-art Convolutional Neural Network (CNN), to achieve robust and accurate sign language recognition. With careful training and optimization, the model achieves mAP (mean Average Precision) values of 92% to 99% across the 15 classes. The extensive dataset combined with the YOLOv5 model enables effective real-time sign language interpretation, showing the potential to improve accessibility and communication for the hearing impaired. This work lays the groundwork for further advances in sign language recognition systems, with implications for inclusive technology applications.
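The abstract gives no implementation details beyond the model family, dataset size, and mAP range. The following is a minimal sketch of how a 15-class sign detector is typically trained and queried with the standard Ultralytics YOLOv5 repository; the data config `signs.yaml`, the class names, and all file paths are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumptions: ultralytics/yolov5 repo, dataset in YOLO format,
# a hypothetical data config "signs.yaml" declaring the 15 gesture classes).
#
# Example signs.yaml (placeholder class names, not from the paper):
#   train: data/signs/images/train
#   val:   data/signs/images/val
#   nc:    15
#   names: ["sign_00", "sign_01", ..., "sign_14"]
#
# Training is usually launched through the repo's CLI, e.g.:
#   python train.py --img 640 --batch 16 --epochs 100 --data signs.yaml --weights yolov5s.pt

import torch

# Load the fine-tuned weights for inference (path is illustrative).
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/exp/weights/best.pt")

# Run detection on a single frame; in real-time use this would be a webcam frame.
results = model("sample_sign_frame.jpg")
results.print()  # per-class detections with confidence scores

# Map detected gesture labels to text output.
labels = results.pandas().xyxy[0]["name"].tolist()
print("Recognized signs:", labels)
```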
