Abstract

In the areas of human–computer interaction (HCI) and computer vision, gesture recognition has long been a research hotspot. With the advent of depth cameras, gesture recognition using RGB-D cameras has gradually become mainstream in this field. However, how to effectively use depth information to build a robust gesture recognition system remains an open problem. In this paper, an RGB-D static gesture recognition method based on fine-tuning Inception V3 is proposed, which eliminates the gesture segmentation and feature extraction steps of traditional algorithms. In contrast to general CNN approaches, the authors adopt a two-stage training strategy to fine-tune the model. The method inserts a layer that concatenates RGB and depth image features into the CNN structure, using the depth information to improve gesture recognition performance. Finally, on the American Sign Language (ASL) recognition dataset, the authors compared their method with traditional machine learning methods, other CNN algorithms, and an RGB-only input baseline. Across the three groups of comparative experiments, the authors' method reached the highest accuracy, 91.35%, the current state of the art on the ASL dataset.
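The abstract does not give implementation details, but the two-stream design it describes (parallel CNN branches whose pooled features are concatenated before classification, trained in two stages) can be sketched roughly as follows. This is a minimal illustration in PyTorch/torchvision, not the authors' code: the names `RGBDGestureNet` and `make_inception_backbone`, the 24-class output (the static ASL letters, an assumption about the dataset), the replication of the depth map to three channels, and the optimizer settings are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

def make_inception_backbone():
    """ImageNet-pretrained Inception V3 truncated to its 2048-d pooled features."""
    net = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
    net.aux_logits = False   # disable the auxiliary classifier head
    net.AuxLogits = None
    net.fc = nn.Identity()   # strip the 1000-way ImageNet classifier
    return net

class RGBDGestureNet(nn.Module):
    """Two-stream sketch: RGB and depth features are concatenated, then classified."""
    def __init__(self, num_classes=24):  # 24 static ASL letters is an assumption
        super().__init__()
        self.rgb_stream = make_inception_backbone()
        self.depth_stream = make_inception_backbone()
        # Classifier over the concatenated 2 x 2048-d feature vector.
        self.classifier = nn.Linear(2048 * 2, num_classes)

    def forward(self, rgb, depth):
        # The depth map is assumed to be replicated to 3 channels upstream
        # so that it fits the pretrained Inception V3 stem.
        feats = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return self.classifier(feats)

model = RGBDGestureNet()

# Stage 1 of a two-stage fine-tuning schedule (illustrative split): freeze both
# pretrained streams and train only the newly added classifier.
for stream in (model.rgb_stream, model.depth_stream):
    for p in stream.parameters():
        p.requires_grad = False
opt_stage1 = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)

# Stage 2: unfreeze everything and fine-tune end to end at a lower learning rate.
for p in model.parameters():
    p.requires_grad = True
opt_stage2 = torch.optim.Adam(model.parameters(), lr=1e-5)
```

Freezing the pretrained backbones first lets the randomly initialized concatenation classifier settle before the low-learning-rate end-to-end pass, which is one common reading of a "two-stage" fine-tuning strategy; the exact layer split and learning rates used by the authors are not stated in the abstract.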
