Abstract

Hearing people can readily connect and communicate with one another; however, those with hearing and speech impairments find it difficult to converse with hearing people without the assistance of an interpreter. For the deaf community, sign language is the principal means of communication. Indian Sign Language has its own grammar, syntax, vocabulary, and unique linguistic features. We propose two methods, a Bidirectional LSTM and a BERT Transformer, to address the problem of sign language translation. The proposed work is validated on the standard INCLUDE-50 dataset and provides promising results. The deep neural networks are evaluated using a combination of data-augmentation approaches and feature extraction with MediaPipe. On the INCLUDE-50 dataset, the best-performing model, which employs a pre-trained feature extractor together with an encoder and a decoder, obtained an accuracy of 89.5%.
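To make the described pipeline concrete, the sketch below shows one plausible way to extract per-frame keypoints with MediaPipe Holistic and feed them to a Bidirectional LSTM classifier in PyTorch. The library choices, landmark layout, hidden sizes, and file names are illustrative assumptions, not the exact configuration reported in the paper.

# Illustrative sketch only: per-frame keypoint extraction with MediaPipe Holistic
# feeding a BiLSTM sign classifier. All hyperparameters are assumptions, not the
# paper's exact configuration.
import cv2
import mediapipe as mp
import numpy as np
import torch
import torch.nn as nn

mp_holistic = mp.solutions.holistic

def extract_keypoints(video_path: str) -> np.ndarray:
    """Return a (num_frames, 225) array: 33 pose + 21 + 21 hand landmarks, (x, y, z) each."""
    def flat(landmarks, count):
        # Missing detections are padded with zeros so every frame has a fixed length.
        if landmarks is None:
            return np.zeros(count * 3, dtype=np.float32)
        return np.array([[p.x, p.y, p.z] for p in landmarks.landmark],
                        dtype=np.float32).flatten()

    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_holistic.Holistic(static_image_mode=False) as holistic:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            frames.append(np.concatenate([
                flat(results.pose_landmarks, 33),
                flat(results.left_hand_landmarks, 21),
                flat(results.right_hand_landmarks, 21),
            ]))
    cap.release()
    return np.stack(frames)

class BiLSTMSignClassifier(nn.Module):
    """Bidirectional LSTM over per-frame keypoints, classifying one of 50 signs."""
    def __init__(self, input_dim: int = 225, hidden_dim: int = 128, num_classes: int = 50):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, input_dim); classify from the final BiLSTM time step.
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])

# Example usage (the video path is a placeholder):
# keypoints = torch.from_numpy(extract_keypoints("sign_clip.mp4")).unsqueeze(0)
# logits = BiLSTMSignClassifier()(keypoints)

A Transformer-based variant would replace the LSTM with a BERT-style encoder over the same keypoint sequence; the overall extract-then-classify structure stays the same.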
