Abstract

People who lack the sense of hearing and the ability to speak face undeniable communication problems in their lives. They communicate using sign language, both among themselves and with others. However, sign language is largely unknown to the greater portion of the human population, which relies on spoken and written language for communication. It is therefore necessary to develop technological tools for interpreting sign language. Much research has been carried out on sign language recognition for most global languages, but there is still scope for developing tools and techniques for local dialects. There are 22 modern Indian languages and more than 19000 languages spoken regionally as mother tongues. This work develops a technical approach for recognizing Assamese Sign Language, associated with Assamese, one of the 22 modern languages of India. Using machine learning techniques, this work establishes a system for identifying hand gestures from Assamese Sign Language. A dataset was prepared from a combination of two-dimensional and three-dimensional images of Assamese gestures, and the MediaPipe framework was applied to detect hand landmarks in the images. The resulting Assamese Sign Language dataset contains 2094 data points covering nine static gestures for vowels and consonants (অ, ই, ঈ, উ, এ, ও, ক, জ, ল) from the Assamese Sign Language. The dataset was used to train a feed-forward neural network, and the model yielded an accuracy of 99%. The results indicate that the method implemented in this work is effective and could be extended to the remaining alphabets and gestures of Assamese Sign Language. The method could also be tried and tested on signs and gestures of other local languages of India.
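The abstract does not specify the network architecture, but the described pipeline (MediaPipe hand landmarks fed to a feed-forward classifier) can be sketched as follows. MediaPipe's hand solution returns 21 landmarks per hand, each with x, y, z coordinates, so each image reduces to a 63-dimensional feature vector; the layer sizes and weights below are purely illustrative assumptions, not the authors' trained model.

```python
import numpy as np

# Each MediaPipe hand detection yields 21 landmarks with (x, y, z)
# coordinates, i.e. a 63-dimensional feature vector per image.
N_LANDMARKS = 21
N_FEATURES = N_LANDMARKS * 3   # 63
N_CLASSES = 9                  # nine static gestures in the dataset

rng = np.random.default_rng(0)

# Hypothetical layer sizes and random weights -- the abstract does not
# state the architecture; in practice these are learned from the data.
W1 = rng.normal(0.0, 0.1, (N_FEATURES, 32))
b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def forward(x):
    """Feed-forward pass: one ReLU hidden layer, softmax output."""
    h = np.maximum(0.0, x @ W1 + b1)        # hidden activations
    logits = h @ W2 + b2                    # class scores
    e = np.exp(logits - logits.max())       # numerically stable softmax
    return e / e.sum()

# A dummy landmark vector stands in for a real MediaPipe detection.
x = rng.uniform(0.0, 1.0, N_FEATURES)
probs = forward(x)
print(probs.shape)  # (9,) -- one probability per gesture class
```

Predicting a gesture then amounts to `probs.argmax()`; training such a network (e.g. with cross-entropy loss) on the 2094 landmark vectors is what yields the reported 99% accuracy.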
