Abstract

Hand gesture recognition and hand pose estimation are two closely correlated tasks. In this paper, we propose a deep-learning based approach which jointly learns an intermediate-level shared feature for these two tasks, so that the hand gesture recognition task can benefit from the hand pose estimation task. In the training process, a semi-supervised training scheme is designed to solve the problem of lacking proper annotation. Our approach detects the foreground hand, recognizes the hand gesture, and estimates the corresponding 3D hand pose simultaneously. To evaluate the hand gesture recognition performance of state-of-the-art methods, we propose a challenging hand gesture recognition dataset collected in unconstrained environments. Experimental results show that our gesture recognition accuracy is significantly boosted by leveraging the knowledge learned from the hand pose estimation task.

Highlights

  • People interact with each other using hand gestures in everyday life

  • We propose a challenging hand gesture recognition dataset captured in unconstrained environments, which can be used to evaluate the performance of state-of-the-art methods

  • We propose a hand gesture recognition approach that jointly learns a shared feature for the hand gesture recognition and hand pose estimation tasks

Summary

Introduction

People interact with each other using hand gestures in everyday life. Hand gesture recognition is an important research topic with a wide range of applications, such as robotics, human-computer interaction, and assisted driving. We propose a deep-learning based approach which effectively transfers hand pose estimation knowledge to the hand gesture recognition task by jointly learning an intermediate-level shared feature. Existing datasets focus on either hand gesture recognition or hand pose estimation, and it is difficult to find a dataset which contains both types of annotation. To tackle this problem, a semi-supervised training scheme is designed to extract the shared feature from hand images with only hand gesture annotation or only hand pose annotation. In this manner, the knowledge learned from the hand pose estimation dataset can be transferred to the hand gesture recognition task. The dataset and the related code will be released at https://github.com/waterai12/CUG-Hand-Gesture
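The semi-supervised scheme above can be sketched as a per-sample loss that only activates the task heads whose annotation is available, while an unsupervised term (the outline lists a hand image reconstruction component) applies to every sample. This is a minimal illustration with hypothetical names and weights, not the paper's actual loss definition:

```python
def multitask_loss(gesture_loss, pose_loss, recon_loss,
                   has_gesture_label, has_pose_label,
                   w_gesture=1.0, w_pose=1.0, w_recon=1.0):
    """Combine per-task losses for one sample, masking out tasks
    whose annotation is missing (semi-supervised training sketch).

    gesture_loss / pose_loss are supervised terms; recon_loss is an
    unsupervised image-reconstruction term that needs no labels.
    """
    total = w_recon * recon_loss  # always applied
    if has_gesture_label:
        total += w_gesture * gesture_loss
    if has_pose_label:
        total += w_pose * pose_loss
    return total


# A sample from a gesture-only dataset contributes no pose term:
loss = multitask_loss(gesture_loss=2.0, pose_loss=3.0, recon_loss=1.0,
                      has_gesture_label=True, has_pose_label=False)
```

Gradients from both kinds of samples flow into the shared feature extractor, which is how knowledge from the pose estimation dataset reaches the gesture recognition task.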

Related Work
Methods
Foreground Hand Detection
Shared Feature Extraction
Hand Gesture Recognition
Hand Pose Estimation
Hand Image Reconstruction
Semi-Supervised Learning
CUG-Hand Dataset
Experimental Setting
Gesture Recognition on LaRED Dataset
Gesture Recognition on CUG-Hand Dataset
Gesture Detection on CUG-Hand Dataset
Findings
Conclusions