Abstract
People around the world with speech and hearing impairments use sign language as their medium of communication. However, communication remains a challenge between people who rely exclusively on sign language and those who do not know it. This gap can be bridged with modern gesture-recognition technology and deep learning. The aim of this paper is to recognise American Sign Language gestures dynamically and to build an intuitive system that translates sign language into text and into speech in multiple languages. The system combines a convolutional neural network with natural language processing, language translation, and text-to-speech algorithms. It recognizes hand gestures dynamically and predicts the corresponding letters to accurately form a desired sentence.
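The recognition stage described above can be sketched as a small toy pipeline: per-frame gesture classification, letter prediction, and sentence assembly. This is a hypothetical illustration only; the paper's actual CNN is replaced here by a stand-in softmax over raw class scores, and the function names (`predict_letters`, `assemble_word`) are assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of the pipeline in the abstract:
# per-frame classification -> letter prediction -> sentence assembly.
# A stand-in softmax over raw logits replaces the real CNN.

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def softmax(logits):
    """Convert raw class scores to probabilities."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict_letters(frame_logits):
    """Map each frame's class scores to its most likely ASL letter."""
    probs = softmax(frame_logits)
    return [LETTERS[i] for i in probs.argmax(axis=-1)]

def assemble_word(letters):
    """Collapse consecutive duplicate predictions (one gesture spans many frames)."""
    word = []
    for ch in letters:
        if not word or word[-1] != ch:
            word.append(ch)
    return "".join(word)

# Toy example: four frames whose scores favour H, H, I, I.
logits = np.full((4, 26), -5.0)
logits[0, 7] = logits[1, 7] = 5.0   # 'H'
logits[2, 8] = logits[3, 8] = 5.0   # 'I'
print(assemble_word(predict_letters(logits)))  # prints "HI"
```

The assembled text would then feed the translation and text-to-speech stages the abstract mentions.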