Abstract

Sign language is used by hearing-impaired people for daily communication. It is a visual language in which hand gestures convey meaning instead of voice. Although sign language has long been used by the hearing impaired, its translation remains a major barrier to communication. In this paper, an image processing approach is presented for recognizing the static alphabets of two popular and widely used sign languages: American Sign Language (ASL) and Indian Sign Language (ISL). First, two datasets of preprocessed gesture images are created, and a three-layer Convolutional Neural Network (CNN) is trained on them. In the later stage, images are captured and preprocessed in real time so that they can be recognized by the CNN model. This paper also includes an approach for translating the two sign languages to English speech and vice versa. Further, because people who are born deaf often have difficulty reading text and learning written languages, optical character recognition (OCR) is used to identify text characters in images, which are then converted to ASL and ISL.
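
To make the recognition pipeline concrete, the sketch below shows one plausible shape of the three-layer CNN classifier the abstract describes, written in Keras. The input resolution (64x64 grayscale) and the 26-class output (one per static alphabet) are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of a three-layer CNN for static alphabet recognition.
    # Assumptions (not from the paper): 64x64 grayscale inputs, 26 classes.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    IMG_SIZE = 64      # assumed input resolution
    NUM_CLASSES = 26   # assumed: one class per static alphabet

    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        # Three convolutional layers, each followed by max pooling
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        # Classifier head over the flattened feature maps
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

At inference time, each real-time camera frame would be preprocessed to the same size and format before being passed to model.predict; the exact preprocessing steps (segmentation, thresholding, resizing) are those described in the paper body, not shown here.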

