Abstract

Gesture classification is a rapidly emerging research domain, and sign gesture recognition is among its core applications. Given regional and language-based variations in sign languages, developing a standard sign gesture classifier remains a central research objective. Many language translators have been proposed to bridge this communication gap, and a variety of models, both software simulations and hardware prototypes, have been implemented using vision-based, sensor-based, and hybrid classifiers. This article presents a smart glove prototype based on flex, accelerometer, and gyroscope sensors designed to capture sign gestures. The sensors are mounted on a glove to capture and compile three datasets: digits (0–10), alphabets (A–Z), and alphanumerics (0–10 and A–Z). The objective of the proposed model is to classify sign gestures made by deaf–mute individuals and recognize the intended meaning of each gesture. K-nearest neighbor, discriminant analysis, and support vector machine classifiers trained on our manually compiled datasets achieve average accuracies of 99.18%, 99.03%, and 99.82%, respectively.
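
The abstract does not specify implementation details, but a minimal sketch of the classification stage, assuming a scikit-learn workflow, an 11-value feature vector (5 flex readings plus 3-axis accelerometer and 3-axis gyroscope readings), and an 80/20 train/test split, could look as follows; the feature layout, library, labels, and hyperparameters here are illustrative assumptions, not taken from the article.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder data: each row is one gesture sample of 11 sensor features;
# labels are a hypothetical subset of the gesture classes (digits/alphabets).
X = np.random.rand(500, 11)              # flex (5) + accelerometer (3) + gyroscope (3)
y = np.random.choice(list("ABC"), 500)   # hypothetical gesture labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The three classifier families named in the abstract.
classifiers = {
    "K-nearest neighbor": KNeighborsClassifier(n_neighbors=5),
    "Discriminant analysis": LinearDiscriminantAnalysis(),
    "Support vector machine": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: accuracy = {acc:.4f}")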
