Abstract

Hand gestures are a form of nonverbal communication used by individuals who cannot express their thoughts in words. They are used mainly in human-computer interaction (HCI), communication among deaf and mute people, and other robotic interface applications. Gesture recognition is a field of computer science focused on improving HCI via touch screens, cameras, and kinetic devices. State-of-the-art systems mainly use computer vision techniques that combine motion sensors and cameras to capture hand gestures in real time and interpret them with machine learning algorithms. Conventional machine learning algorithms, however, often struggle with the visual complexities of hand gesture images, such as skin color, distance, lighting, hand direction, position, and background. In this article, an adaptive weighted multi-scale resolution (AWMSR) network with a deep embedded hybrid convolutional neural network and long short-term memory network (hybrid CNN-LSTM) is proposed for identifying different hand gesture signs with high recognition accuracy. The proposed methodology comprises three steps: input preprocessing, feature extraction, and classification. To compensate for the complex visual effects in the input images, a histogram equalization technique is applied, which redistributes the gray-level values of an image so that their occurrence probabilities become more uniform, enhancing contrast. The multi-block local binary pattern (MB-LBP) algorithm is employed for feature extraction, capturing crucial features such as hand shape structure, curvature, and invariant moments. The AWMSR network with the deep embedded hybrid CNN-LSTM is evaluated on two benchmark datasets, the Jochen Triesch static hand posture dataset and the NUS hand posture dataset-II, to assess its stability in identifying different hand gestures. The weight function of the deep embedded CNN-LSTM architecture is optimized using the puzzle optimization algorithm. The efficiency of the proposed methodology is verified with several performance evaluation metrics: accuracy, loss, confusion matrix, intersection over union (IoU), and execution time. The proposed methodology achieves recognition accuracies of 97.86% and 98.32% on the two datasets, respectively.
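As a rough illustration of the preprocessing step, the sketch below applies standard histogram equalization to a grayscale frame with OpenCV. The file name and the choice of OpenCV are assumptions made for this sketch, not details taken from the paper.

```python
import cv2

# Load a hand-gesture frame in grayscale ("gesture.png" is a
# hypothetical file name used only for this sketch).
img = cv2.imread("gesture.png", cv2.IMREAD_GRAYSCALE)

# Histogram equalization spreads the gray-level distribution so that
# intensity values occur with roughly uniform probability, boosting
# contrast in poorly lit regions before feature extraction.
equalized = cv2.equalizeHist(img)
```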
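The MB-LBP descriptor replaces the single-pixel comparisons of plain LBP with comparisons between the mean intensities of rectangular blocks. Below is a minimal NumPy sketch of one MB-LBP code under the common 3x3-block formulation; the block size and neighbour ordering are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def mb_lbp_code(patch: np.ndarray, block: int = 3) -> int:
    """Multi-block LBP code for one (3*block x 3*block) patch.

    The patch is split into a 3x3 grid of block-by-block cells; each of
    the 8 outer cell means is thresholded against the centre cell mean,
    yielding an 8-bit code.
    """
    # (3*block, 3*block) -> (3, 3) grid of per-cell mean intensities.
    means = patch.reshape(3, block, 3, block).mean(axis=(1, 3))
    center = means[1, 1]
    # Clockwise neighbour order starting at the top-left cell.
    neighbours = [means[0, 0], means[0, 1], means[0, 2],
                  means[1, 2], means[2, 2], means[2, 1],
                  means[2, 0], means[1, 0]]
    return sum(int(m >= center) << i for i, m in enumerate(neighbours))

# Usage on a random 9x9 patch: the result is an 8-bit code in [0, 255].
patch = np.random.randint(0, 256, (9, 9)).astype(float)
print(mb_lbp_code(patch))
```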
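The classifier is described as a deep embedded hybrid CNN-LSTM; its exact architecture, the AWMSR weighting, and the puzzle-optimization training are not reproduced here. Purely as a sketch of the hybrid idea, the Keras model below uses convolutional blocks for spatial features and an LSTM that reads the pooled feature-map rows as a sequence. All layer sizes, the 64x64 input, and the 10-class output are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical hybrid CNN-LSTM sketch: CNN extracts spatial features,
# the LSTM consumes the feature-map rows as a 16-step sequence.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),                  # -> (32, 32, 32)
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),                  # -> (16, 16, 64)
    layers.Reshape((16, 16 * 64)),          # rows become time steps
    layers.LSTM(128),
    layers.Dense(10, activation="softmax"), # illustrative class count
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```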
