Abstract

This paper converts 24 American Sign Language (ASL) alphabet hand gestures, acquired in real time and independently of the signer, into human- or machine-readable English text. In the proposed work, the hand gestures used in the cognition and recognition processes are invariant to scale, luminance, gender, and distance against the complex background of an indoor location. The Viola-Jones algorithm, the CIE Lab color model, and the Canny approximation to the derivative are used for hand segmentation. In both the cognition and recognition processes, features such as the boundary, centroid, entropy, Hu moments, Zernike moments, Gabor filters, Histogram of Oriented Gradients (HOG), and Local Phase Quantization (LPQ) are extracted from the hand gestures. K-Nearest Neighbor (KNN), Multiclass Support Vector Machine (M-SVM), and Decision Tree (DT) classifiers are used to classify the gestures. In the recognition task, these classifiers are applied independently to the same set of hand gestures to compare recognition rate and recognition time. Detailed experimentation shows that the KNN classifier achieves an average recognition rate of 92.71% and an average recognition time of 0.48 s per gesture, outperforming the M-SVM and DT classifiers. These results also compare favourably with state-of-the-art techniques in a real-time environment under the various invariance conditions considered.
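For readers who want to experiment with the kind of pipeline the abstract describes, a minimal sketch is given below in Python with OpenCV, scikit-image, and scikit-learn. The Lab chroma thresholds, HOG parameters, and k value are illustrative assumptions rather than the authors' published settings, and the Viola-Jones detection stage and remaining feature types are omitted for brevity.

```python
# Minimal sketch of the described pipeline: CIE Lab segmentation,
# Canny edges, Hu-moment + HOG features, and a KNN classifier.
# All thresholds and parameters below are illustrative placeholders,
# not the values used in the paper.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

def extract_features(bgr_image):
    """Segment the hand and return a combined Hu-moment + HOG vector."""
    # Convert to CIE Lab and threshold the a/b chroma channels to
    # isolate skin-like regions (assumed illustrative range).
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2Lab)
    mask = cv2.inRange(lab, (20, 135, 130), (255, 180, 175))
    hand = cv2.bitwise_and(bgr_image, bgr_image, mask=mask)

    # Canny edges give the hand boundary used for shape features.
    gray = cv2.cvtColor(hand, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # Hu moments: seven translation/scale/rotation-invariant shape
    # descriptors computed from the edge map.
    hu = cv2.HuMoments(cv2.moments(edges)).flatten()

    # HOG over a fixed-size crop captures local gradient structure.
    resized = cv2.resize(gray, (64, 64))
    hog_vec = hog(resized, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    return np.concatenate([hu, hog_vec])

def train_knn(images, labels, k=3):
    """Fit a KNN classifier on feature vectors from labelled gestures."""
    X = np.array([extract_features(img) for img in images])
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X, labels)
    return knn
```

At recognition time, `knn.predict([extract_features(frame)])` maps a new frame to one of the 24 alphabet classes; swapping in scikit-learn's SVC or DecisionTreeClassifier reproduces the kind of classifier comparison reported above.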
