Abstract

A hand gesture recognition system has a wide range of applications in human-computer interaction (HCI) and sign language. This work proposes a vision-based system for the recognition of static hand gestures. It deals with images of bare hands and recognizes gestures under variations in illumination, rotation, position and size of the gesture images. The proposed system consists of three phases: preprocessing, feature extraction and classification. The preprocessing phase involves image enhancement, segmentation, rotation and filtering. To obtain a rotation-invariant gesture image, a novel technique is proposed in this paper that aligns the 1st principal component of the segmented hand gesture with the vertical axis. In the feature extraction phase, this work extracts localized contour sequences (LCS) and block-based features and proposes a novel mixture of features (or combined features) for a better representation of static hand gestures. The combined features are applied as input to a multiclass support vector machine (SVM) classifier to recognize static hand gestures. The proposed system is implemented and tested on three different hand alphabet databases. The experimental results show that the proposed system is able to recognize static gestures with recognition sensitivities of 99.50%, 93.58% and 98.33% for database I, database II and database III, respectively, which are better than those of earlier reported methods.
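
As a rough illustration of the rotation-normalization step described in the abstract, the sketch below aligns the 1st principal component of a segmented hand mask with the vertical image axis using PCA on the hand-pixel coordinates. The function name, libraries and parameter choices are assumptions for illustration only and are not the authors' implementation.

```python
import numpy as np
from scipy import ndimage


def rotation_normalize(binary_mask):
    """Rotate a segmented hand mask so that its 1st principal component
    coincides with the vertical axis (illustrative sketch only).

    binary_mask: 2-D numpy array whose nonzero pixels belong to the hand.
    """
    # Coordinates of the hand pixels, arranged as (x, y) points
    ys, xs = np.nonzero(binary_mask)
    coords = np.column_stack((xs, ys)).astype(float)
    coords -= coords.mean(axis=0)                 # centre the point cloud

    # Eigen-decomposition of the 2x2 covariance matrix of the points
    cov = np.cov(coords, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = eigvecs[:, np.argmax(eigvals)]    # 1st principal component

    # Angle between the principal axis and the vertical axis
    angle = np.degrees(np.arctan2(principal[0], principal[1]))

    # Rotate the mask so the principal axis becomes vertical; the sign of
    # the angle may need flipping depending on the image coordinate convention
    return ndimage.rotate(binary_mask.astype(float), angle,
                          reshape=True, order=0)
```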
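
The classification stage could likewise be sketched as follows, assuming the LCS and block-based feature vectors have already been computed per image and are simply concatenated before being fed to a multiclass SVM. The scikit-learn pipeline, kernel and hyperparameters shown here are assumptions, not the configuration reported in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def combine_features(lcs_features, block_features):
    """Concatenate LCS and block-based feature vectors (one row per image)."""
    return np.hstack([lcs_features, block_features])


# SVC handles multiclass classification internally (one-vs-one scheme)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))

# Hypothetical usage with precomputed training/test feature matrices:
# clf.fit(combine_features(train_lcs, train_block), train_labels)
# predictions = clf.predict(combine_features(test_lcs, test_block))
```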
