Abstract

3D hand pose estimation from a monocular RGB image is a highly challenging task due to self-occlusion, diverse appearances, and the inherent depth ambiguity of monocular images. Most previous methods first employ deep neural networks to fit 2D joint location maps and then combine them with implicit or explicit pose-aware features to directly regress 3D hand joint positions using purpose-built network structures. However, the skeleton positions and the corresponding skeleton-aware content in the latent space are invariably ignored. This skeleton-aware content bridges the gap between hand-joint and hand-skeleton information by relating the features of different hand joints to the distribution of skeleton positions in 2D space. To address this issue, we propose a simple yet efficient deep neural network that directly recovers reliable 3D hand pose from monocular RGB images with a faster estimation process. Our goal is to reduce the model's computational complexity while maintaining high-precision performance. To this end, we design a novel Feature Chat Block (FCB) for feature boosting, which enables enhanced interaction between joint and skeleton features. First, the FCB module updates joint features using a semantic graph convolutional network and a multi-head self-attention mechanism. The GCN-based branch focuses on the physical hand-joint connections encoded in a binary adjacency matrix, while the self-attention branch attends to the joint pairs in the complementary matrix. Then, the FCB module employs a query-key mechanism, with queries and keys representing joint and skeleton features respectively, to further implement feature interaction. Through a stack of FCB modules, our model updates the fused features in a coarse-to-fine manner and finally outputs the predicted 3D hand pose. We conducted a comprehensive set of ablation experiments on the InterHand2.6M dataset to validate the effectiveness of the proposed method. Additionally, experimental results on the Rendered Hand Dataset, Stereo Hand Dataset, First-Person Hand Action Dataset, and FreiHAND Dataset show that our model surpasses state-of-the-art methods with faster inference speed.
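The abstract does not give the exact layer configuration of the FCB, so the following PyTorch sketch is only an illustration of the described mechanism under our own assumptions: the class name FeatureChatBlock, the feature dimensions, the degree-normalized GCN update, and the normalization scheme are hypothetical choices, not the paper's implementation. It shows the three ingredients named above: a GCN branch over the binary adjacency matrix, self-attention restricted to the complementary matrix, and a query-key interaction between joint and skeleton features.

```python
import torch
import torch.nn as nn


class FeatureChatBlock(nn.Module):
    """Hypothetical FCB sketch: GCN + masked self-attention over joints,
    then cross-attention letting joint queries "chat" with skeleton keys."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        # GCN branch over the physical skeleton (binary adjacency matrix).
        self.gcn = nn.Linear(dim, dim)
        # Self-attention branch restricted to the complementary matrix,
        # i.e. joint pairs that are NOT connected by a bone.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Query/key/value projections for the joint-skeleton interaction.
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, joint_feat, skel_feat, adj):
        # joint_feat: (B, J, C) per-joint features
        # skel_feat:  (B, S, C) per-bone (skeleton) features
        # adj:        (J, J) binary adjacency matrix of the hand skeleton
        adj = adj.float()
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        # Degree-normalized message passing over physical bone connections.
        gcn_out = self.gcn((adj / deg) @ joint_feat)

        # attn_mask=True blocks attention, so masking bone-connected pairs
        # leaves attention over the complementary (non-adjacent) joint pairs.
        bone_mask = adj > 0
        attn_out, _ = self.attn(joint_feat, joint_feat, joint_feat,
                                attn_mask=bone_mask)
        joint_feat = self.norm1(joint_feat + gcn_out + attn_out)

        # Joint-skeleton "chat": joint queries attend to skeleton keys/values.
        q = self.to_q(joint_feat)
        k, v = self.to_k(skel_feat), self.to_v(skel_feat)
        scores = (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5)
        fused = scores.softmax(-1) @ v
        return self.norm2(joint_feat + fused), skel_feat


if __name__ == "__main__":
    fcb = FeatureChatBlock(dim=128)
    joints = torch.randn(2, 21, 128)   # 21 hand joints
    bones = torch.randn(2, 20, 128)    # a 21-joint hand has 20 bones
    adj = torch.zeros(21, 21)          # fill in real bone connectivity here
    out, _ = fcb(joints, bones, adj)
    print(out.shape)                   # torch.Size([2, 21, 128])
```

Stacking several such blocks, as the abstract describes, would refine the fused features coarse-to-fine before a final regression head outputs the 3D joint positions.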
