Abstract

Communication is an essential need for every person in society, whether in audio, video, or text form. Gestures are natural expressions of communication that convey specific meanings. Combined with facial expressions, these gestures form a communication tool for the speech-impaired and the hearing-impaired known as sign language. It varies with each country's native language, as in American Sign Language, British Sign Language, Japanese Sign Language, and Indian Sign Language. Research in the field of sign language recognition has increased tremendously in recent decades. This paper aims at developing a gesture-based human-machine interface. Indian Sign Language is a visual-gestural language that bridges the gap between the speech- and hearing-impaired and the rest of society, removing the dependence on interpreters and enabling independent expression. The proposed system is a wearable glove with ten flex sensors and two accelerometers that recognizes words in the sign language vocabulary. The classified results are sent to a voice module, which plays the voice corresponding to the gesture through a speaker. The first version of the glove, without accelerometers, achieved an accuracy of 74.12%. Accuracy improved to 97.2% in the second version, after accelerometers were placed on the back of the palm of both hands. Both versions were verified with datasets varying in gender and signer. The proposed glove outperforms existing gloves on sign misclassification and cost, and avoids the constraints associated with image-based gesture recognition. Future extensions include adapting the glove to other countries' sign languages or to gesture-controlled devices.
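As a rough illustration of the sense-classify-playback pipeline described above, the sketch below shows how a glove controller might sample ten flex sensors and two accelerometers, match the 16-value reading against stored gesture templates by nearest-neighbour distance, and forward the result to a serial voice module. This is a minimal hypothetical sketch, not the paper's implementation: the board choice (an Arduino Mega, for its ten analog inputs), pin mapping, template values, template-matching classifier, and voice-module protocol are all assumptions.

#include <limits.h>

// Hypothetical glove firmware sketch (not from the paper).
// Assumes an Arduino Mega, flex sensors wired as voltage dividers on
// A0-A9, and a voice module that accepts a one-byte gesture index
// over the hardware serial port.

const int NUM_FLEX = 10;
const int flexPins[NUM_FLEX] = {A0, A1, A2, A3, A4, A5, A6, A7, A8, A9};

const int NUM_GESTURES = 3;   // tiny illustrative 3-word vocabulary
const int FEATURES = 16;      // 10 flex values + 2 x 3 accelerometer axes

// Illustrative calibration templates (raw ADC / accelerometer units).
int templates[NUM_GESTURES][FEATURES] = {
  {820, 810, 790, 805, 815, 800, 790, 810, 820, 800, 0, 0, 255, 0, 0, 255},
  {300, 310, 820, 815, 805, 300, 295, 810, 820, 800, 120, 0, 200, 120, 0, 200},
  {300, 295, 305, 310, 300, 305, 295, 300, 310, 305, 0, 200, 50, 0, 200, 50},
};

void readAccelerometer(int which, int out[3]) {
  // Placeholder: a real build would read an I2C/SPI accelerometer here
  // (e.g. via the Wire library). Zeros keep the sketch self-contained.
  out[0] = out[1] = out[2] = 0;
}

// Return the index of the template closest to the sample
// by squared Euclidean distance.
int classify(const int sample[FEATURES]) {
  long bestDist = LONG_MAX;
  int best = -1;
  for (int g = 0; g < NUM_GESTURES; g++) {
    long d = 0;
    for (int f = 0; f < FEATURES; f++) {
      long diff = (long)sample[f] - templates[g][f];
      d += diff * diff;
    }
    if (d < bestDist) { bestDist = d; best = g; }
  }
  return best;
}

void setup() {
  Serial.begin(9600);   // assumed link to the voice module
}

void loop() {
  int sample[FEATURES];
  for (int i = 0; i < NUM_FLEX; i++) sample[i] = analogRead(flexPins[i]);

  int accel[3];
  readAccelerometer(0, accel);   // left-hand accelerometer
  for (int i = 0; i < 3; i++) sample[NUM_FLEX + i] = accel[i];
  readAccelerometer(1, accel);   // right-hand accelerometer
  for (int i = 0; i < 3; i++) sample[NUM_FLEX + 3 + i] = accel[i];

  int gesture = classify(sample);
  if (gesture >= 0) Serial.write((uint8_t)gesture);  // voice module plays clip
  delay(200);   // crude debounce between readings
}

A template-matching classifier is used here only because it runs on-device with no dependencies; the paper's actual classification method may differ.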
