Abstract

Sign Language Recognition aims at providing an efficient and accurate mechanism for recognizing hand gestures made in sign languages and converting them into text and speech. Sign language is a means of communication using bodily movements, especially of the hands and arms. With sign language recognition methods, dialog between the deaf and the hearing society can become a reality. In this project, we carry out sign language recognition by building 3D convolutional neural network (3DCNN) models that perform multi-class prediction on input videos containing hand gestures. On recognition of the input, both text and speech are generated and presented as output to the user. In addition, we implement real-time video recognition and continuous sign language recognition for multi-word videos. We present a method for recognizing words in three languages – Tamil Sign Language (TSL), Indian Sign Language (ISL), and American Sign Language (ASL) – and outperform state-of-the-art alternatives with accuracies of 97.5%, 99.75%, and 98%, respectively.

Keywords: 3DCNN · Sign language recognition · Video classification
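The core idea behind the 3DCNN models described above is that convolution kernels slide over time as well as space, so a single filter captures motion across consecutive frames. The abstract does not give the authors' architecture, so the following is only a minimal NumPy sketch of one spatio-temporal convolution over a single-channel clip; the clip and kernel sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

def conv3d(video, kernel):
    """Valid-mode 3D cross-correlation over (frames, height, width).

    Each output value summarizes a small spatio-temporal volume of the
    clip, which is what lets a 3DCNN model hand motion, not just pose.
    """
    t, h, w = video.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(video[i:i+kt, j:j+kh, k:k+kw] * kernel)
    return out

# Hypothetical input: a 16-frame, 32x32-pixel, single-channel gesture clip.
clip = np.random.rand(16, 32, 32)
kernel = np.random.rand(3, 3, 3)  # one 3x3x3 spatio-temporal filter
features = conv3d(clip, kernel)
print(features.shape)  # (14, 30, 30)
```

In a full model, stacks of such filters (with learned weights, pooling, and a final softmax layer) would feed the multi-class prediction over the sign vocabulary; deep-learning frameworks provide this operation directly (e.g. a 3D convolution layer), so the explicit loops here are purely for illustration.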
