Abstract

Sign language is a form of non-verbal communication that enables deaf and mute individuals to interact with others. Although many sign language recognition models narrow the gap between the deaf-mute community and hearing people, there is a growing need for effective and reliable approaches to recognizing hand gestures under complex background conditions. To attain this objective, a novel modified deep convolutional neural network-based hybrid arithmetic hunger games (MDCNN-HAHG) model is developed for the accurate recognition of hand gestures. Images of hand gestures from two datasets, the First-Person Hand Action (FHPA) dataset and the Dynamic Hand Gesture (DHG) dataset, are used as input. Data distortions present in the datasets are eliminated during preprocessing. Afterwards, the significant features that help to accurately recognize the type of posture are extracted in the feature extraction stage. Finally, the proposed MDCNN-HAHG model efficiently recognizes the hand gestures. To improve the recognition performance of the MDCNN model, its hyperparameters, such as the learning rate, dropout rate, convolution kernel size, and pooling size, are adaptively tuned and reweighted using the hybrid arithmetic hunger games (HAHG) algorithm. Accuracy, precision, recall, specificity, and F-measure are employed to evaluate the performance of the MDCNN-HAHG model. The results show that the MDCNN-HAHG technique attains an accuracy of about 97.2% on the DHG-14 dataset, 96.13% on the DHG-28 dataset, and 92.78% on the FHPA dataset in recognizing different hand gestures.
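The five evaluation metrics named in the abstract follow the standard confusion-matrix definitions. As a minimal sketch (the function and variable names below are illustrative, not from the paper), for a single class treated as the positive label:

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix metrics for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp) if tp + fp else 0.0
    recall      = tp / (tp + fn) if tp + fn else 0.0   # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f_measure   = (2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f_measure": f_measure}
```

For a multi-class gesture task such as DHG-14 these per-class values would typically be macro-averaged over all gesture classes.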

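The abstract does not give the HAHG update rules, only that it tunes the learning rate, dropout rate, kernel size, and pooling size of the MDCNN. As a generic stand-in for such a metaheuristic tuner (plain random search, not the actual HAHG algorithm; all names below are illustrative):

```python
import random

# Hypothetical search space over the four hyperparameters named in the abstract.
SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "dropout_rate":  [0.2, 0.3, 0.5],
    "kernel_size":   [3, 5, 7],
    "pool_size":     [2, 3],
}

def random_search(evaluate, n_trials=20, seed=0):
    """Generic tuner sketch: sample candidate hyperparameter sets and
    keep the one with the best score (e.g. validation accuracy)."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        score = evaluate(cfg)  # would train/validate the MDCNN in practice
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

A population-based hybrid such as HAHG would replace the independent sampling with guided position updates, but the outer evaluate-and-keep-best loop has the same shape.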