Abstract

Skeleton-based hand gesture recognition has achieved great success in recent years. However, most existing methods cannot extract spatiotemporal features well due to skeleton noise. In real applications, some large models also suffer from huge parameter counts and low execution speed. This paper presents a lightweight skeleton-based hand gesture recognition network that uses multi-input fusion to address these issues. We propose two joint-oriented features, the Center Joint Distances (CJD) feature and the Center Joint Angles (CJA) feature, as the static branch. The motion branch consists of the Global Linear Velocities (GLV) feature and the Local Angular Velocities (LAV) feature. By fusing the static and motion branches, a robust input can be generated and fed into a lightweight CNN-based network to recognize hand gestures. Our method achieves 95.8% and 92.5% hand gesture recognition accuracy with only 2.24M parameters on the 14-gesture and 28-gesture settings of the SHREC'17 dataset. Experimental results show that the proposed method outperforms state-of-the-art (SOTA) methods.

Keywords: Skeleton-based hand gesture recognition; Multi-input fusion; Joint-oriented feature
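The four features named above can be sketched from raw joint coordinates. The following is a minimal illustration, not the paper's implementation: it assumes a skeleton sequence of shape (T, J, 3), treats the wrist (index 0) as the "center" joint, interprets CJA as the angles of each center-to-joint vector against the coordinate axes, and computes both velocity features as frame-to-frame differences. All of these choices are assumptions for illustration only.

```python
import numpy as np

def extract_features(joints, center_idx=0):
    """Sketch of the four joint-oriented features described in the abstract.

    joints: array of shape (T, J, 3) -- T frames, J hand joints, xyz coordinates.
    center_idx: index of the assumed "center" joint (e.g. the wrist); the joint
    actually used in the paper is not specified here.
    """
    center = joints[:, center_idx:center_idx + 1, :]   # (T, 1, 3)
    offsets = joints - center                          # vectors from center joint

    # Static branch
    cjd = np.linalg.norm(offsets, axis=-1)             # Center Joint Distances, (T, J)
    unit = offsets / (cjd[..., None] + 1e-8)           # unit direction vectors
    cja = np.arccos(np.clip(unit, -1.0, 1.0))          # Center Joint Angles vs. axes, (T, J, 3)

    # Motion branch: first-order temporal differences
    glv = np.diff(joints, axis=0)                      # Global Linear Velocities, (T-1, J, 3)
    lav = np.diff(cja, axis=0)                         # Local Angular Velocities, (T-1, J, 3)

    return cjd, cja, glv, lav

# Example: 10 frames of a 22-joint hand skeleton
cjd, cja, glv, lav = extract_features(np.random.rand(10, 22, 3))
```

In this sketch the static branch (CJD, CJA) captures per-frame hand shape relative to the center joint, while the motion branch (GLV, LAV) captures how position and angle change over time; the fused stack of all four would form the multi-channel input to the CNN.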

