Abstract

Communicating with hearing-impaired individuals poses a significant challenge. With the advancement of computer vision, however, automatic sign language recognition (SLR) is gradually addressing this problem. A key challenge in SLR lies in accurately capturing and interpreting the subtle nuances and variations in sign language gestures. In this study, we focus on isolated sign language recognition using LSA64, a small-scale dataset of Argentinian Sign Language. We concatenate a CNN and an LSTM into an end-to-end recognition model for this dataset, achieving promising results: nearly 97% accuracy while keeping the model compact in size.
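The pipeline described above (a frame-level CNN whose features are fed to an LSTM for temporal modeling) can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the paper's actual architecture: the layer sizes, feature dimensions, and input resolution are assumptions, and only the class count (64 signs in LSA64) comes from the dataset.

```python
import torch
import torch.nn as nn

class CNNLSTMSignClassifier(nn.Module):
    """Illustrative CNN+LSTM model: a small CNN encodes each video
    frame, and an LSTM aggregates the per-frame features over time.
    Layer sizes are hypothetical, not those of the paper."""

    def __init__(self, num_classes=64, feat_dim=128, hidden_dim=256):
        super().__init__()
        # Frame-level CNN encoder (assumed layer configuration).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Temporal model over the sequence of frame features.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, time, channels, height, width)
        b, t = clips.shape[:2]
        # Encode all frames at once, then restore the time axis.
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        # Classify from the final LSTM hidden state.
        return self.head(h_n[-1])

# Example: a batch of 2 clips, 8 RGB frames each, 64x64 pixels.
model = CNNLSTMSignClassifier()
logits = model(torch.randn(2, 8, 3, 64, 64))  # shape (2, 64)
```

Flattening the batch and time axes lets the 2D CNN process every frame in one pass before the LSTM consumes the restored sequence, which is a common way to wire a frame encoder to a recurrent head.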
