Abstract

Sign language is used around the world by the deaf community for communication. To enable conversation between sign language users and non-signers, an electronic sign language translator may prove useful. In this paper, classification of 50 signs from Indian Sign Language is performed using data acquired from multiple surface electromyograms, gyroscopes, and accelerometers on one forearm of the signers. A novel multistage classification of signs with ensemble machine learning models is proposed. First, a binary classifier determines whether the sign is a static posture or involves dynamic hand motion. Then, the sign is classified using one of two multi-class classifiers, each trained to classify the sign from among the set of static signs or the set of dynamic signs. Random forest (RF) and extreme gradient boosting machine are compared for determining the features important for classification of signs from the two categories. It is observed that more of the electromyogram features are important for classification of static signs, while features extracted from accelerometer and gyroscope signals are important for classification of dynamic signs. The proposed multistage classification approach achieved an overall classification accuracy of up to 98% with RF, which is greater than that achieved when a single classifier is used to classify all the signs.
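
The abstract describes a two-stage routing scheme: a binary static/dynamic classifier followed by one of two category-specific multi-class models. A minimal sketch of that structure, assuming pre-extracted feature vectors per sign and binary static/dynamic labels (file names, feature layout, and hyperparameters below are hypothetical, not taken from the paper), might look as follows:

```python
# Illustrative sketch of a multistage classification pipeline with random forests.
# Assumes features have already been extracted from sEMG, accelerometer, and
# gyroscope signals; the .npy file names and labels here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.load("features.npy")            # (n_samples, n_features) feature matrix
y_sign = np.load("sign_labels.npy")    # sign identity, e.g. 0..49
y_motion = np.load("motion_labels.npy")  # 0 = static posture, 1 = dynamic motion

# Stage 1: binary classifier separating static postures from dynamic signs
stage1 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y_motion)

# Stage 2: one multi-class classifier per category, trained only on that category
static_mask, dynamic_mask = (y_motion == 0), (y_motion == 1)
rf_static = RandomForestClassifier(n_estimators=200, random_state=0).fit(
    X[static_mask], y_sign[static_mask])
rf_dynamic = RandomForestClassifier(n_estimators=200, random_state=0).fit(
    X[dynamic_mask], y_sign[dynamic_mask])

def classify(x):
    """Route a feature vector through stage 1, then the matching stage-2 model."""
    x = np.asarray(x).reshape(1, -1)
    if stage1.predict(x)[0] == 0:
        return rf_static.predict(x)[0]
    return rf_dynamic.predict(x)[0]
```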
