Abstract

Indian Sign Language is the language of communication among hearing- and speech-impaired people in India. Among the various forms of gesture, hand gestures are the most widely used for communication. Real-time classification of different signs is a challenging task because of variation in the shape and position of the hands, as well as variation in the background, which differs from person to person. There is little availability of datasets corresponding to Indian signs, which poses a problem for researchers. To address this problem, we designed our own dataset of 1000 signs for the digits 1 to 10, collected from 100 different people under varying background conditions by changing colour and light-illumination settings. The dataset comprises signs from both left-handed and right-handed people. Feature extraction methodologies are studied and applied to the recognition of sign language. This paper focuses on a deep learning CNN (convolutional neural network) approach with the pretrained AlexNet model for computing the feature vector. A multiclass SVM (Support Vector Machine) is applied to classify Indian Sign Language in real-time surroundings. The paper also presents a comparative analysis of the deep learning feature extraction method against histogram of oriented gradients, bag of features, and speeded-up robust features (SURF) extraction methods. The experimental results show that deep learning feature extraction using the pretrained AlexNet model gives an accuracy of around 85% and above for the recognition of signed digits with a 60% training set and 40% testing set.
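The pipeline described above (pretrained AlexNet features fed to a multiclass SVM, with a 60/40 train/test split) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the library choices (PyTorch/torchvision and scikit-learn), the dataset path "dataset/<digit_label>/<image>.jpg", and all variable names are assumptions introduced here for clarity.

```python
# Minimal sketch: AlexNet feature extraction + multiclass SVM classification.
# Assumes images are arranged as dataset/<digit_label>/<image>.jpg (hypothetical layout).
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Pretrained AlexNet; drop the final classification layer so the network
# outputs a 4096-dimensional feature vector per image.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier = torch.nn.Sequential(*list(alexnet.classifier.children())[:-1])
alexnet.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),  # torchvision's AlexNet expects 224x224 inputs
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

data = ImageFolder("dataset", transform=preprocess)  # hypothetical dataset path
loader = DataLoader(data, batch_size=32, shuffle=False)

features, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        feats = alexnet(images)          # (batch, 4096) feature vectors
        features.extend(feats.numpy())
        labels.extend(targets.numpy())

# 60% training / 40% testing split, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, train_size=0.6, stratify=labels, random_state=0)

# Multiclass SVM (one-vs-one by default in scikit-learn) on the CNN features.
svm = SVC(kernel="linear")
svm.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, svm.predict(X_test)))
```

The same feature vectors could be swapped out for HOG, bag-of-features, or SURF descriptors to reproduce the comparative analysis mentioned in the abstract.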
