Abstract

Gesture recognition, as a natural, convenient, and intuitive modality, has recently received increasing attention in human–machine interaction (HMI). However, vision-based gesture recognition methods are often restricted by the environment, while classical wearable-device-based strategies suffer from relatively low accuracy or complicated hardware structures. In this study, we first design a low-cost, efficient data glove with a simple hardware structure that captures finger movement and bending simultaneously. Second, a novel dynamic hand gesture recognition algorithm (DGDL-GR) is proposed to recognize dynamic sign language, in which a fusion model of a convolutional neural network (fCNN) and a generic temporal convolutional network (TCN) is fully exploited. The fCNN (a fusion of a 1-D CNN and a 2-D CNN) is proposed to extract time-domain features of finger movement and spatial-domain features of finger bending from the resistance signals simultaneously. Moreover, owing to the strengths of the TCN in sequence modeling, this work proposes a TCN-based hand gesture recognition method that combines causal convolution, dilated convolution, and a residual network with an appropriate number of layers, so that both long- and short-term dependencies in the hand gesture features are deeply mined and finally classified. Results of extensive experiments demonstrate that the proposed DGDL-GR algorithm outperforms many state-of-the-art algorithms in accuracy, F1 score, precision, and recall on a real-world dataset. In addition, the number of residual blocks and other key hyperparameters of the proposed DGDL-GR algorithm are studied thoroughly in this work.
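The TCN building blocks named in the abstract (causal convolution, dilated convolution, and residual connections) can be sketched minimally. The following NumPy sketch is illustrative only, not the authors' implementation; the kernel weights, single-channel signal, and ReLU activation are assumptions made for clarity.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """Causal dilated 1-D convolution on a single-channel signal x.

    y[t] depends only on x[t], x[t-d], x[t-2d], ... (never on future
    samples), which is achieved by zero-padding on the left.
    """
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    y = np.zeros(len(x))
    for t in range(len(x)):
        for i in range(k):
            # tap i reaches back i * dilation steps into the past
            y[t] += w[i] * xp[pad + t - i * dilation]
    return y

def tcn_residual_block(x, w, dilation=1):
    """One simplified TCN residual block: dilated causal conv -> ReLU,
    then a skip connection adding the block input back to its output."""
    h = np.maximum(causal_dilated_conv(x, w, dilation), 0.0)  # ReLU
    return np.asarray(x, dtype=float) + h  # residual (skip) connection
```

Stacking such blocks with exponentially growing dilations (1, 2, 4, ...) lets the receptive field cover long-range dependencies, which is why TCNs capture both short- and long-term structure in gesture sequences.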
