Parametric and non-parametric methods are the two prevailing strategies for 3D hand pose reconstruction. Parametric methods predict low-dimensional parameters to fit a predefined hand model to the input image. Benefiting from the prior knowledge embedded in hand models, they guarantee plausible hand poses, but their estimation accuracy is limited by nonlinear regression and the loss of spatial information. In contrast, non-parametric methods directly estimate the coordinates of keypoints or mesh vertices from the input image; the reconstructed 3D hand poses are highly precise but can be less robust. In this paper, we integrate the advantages of both strategies for accurate and robust hand pose reconstruction. Specifically, we disentangle hand pose reconstruction into global modeling and local refinement, performed in a coarse-to-fine manner. First, we use global features from the encoder to generate an initial estimate with a parametric method, providing hand prior knowledge for the subsequent stages. Then, we progressively fuse multi-scale contextual features for local refinement, explicitly integrating global prior information with local visual features. In particular, we introduce a consecutive pixel-aligned feature retrieval module that extracts fine-grained information from the visual features, thereby achieving pixel-level alignment. Furthermore, we show that our method extends to weakly-supervised learning, where only sparse pose annotations are needed, potentially alleviating the burden of meticulous mesh annotation. The effectiveness and robustness of our method are substantiated by both fully- and weakly-supervised experiments, in which it outperforms state-of-the-art methods. We plan to release our code at https://github.com/Kun-Gao/P_GLFnet.
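To make the pixel-aligned feature retrieval concrete, the sketch below illustrates the generic technique the abstract refers to: projecting coarse mesh vertices into the image plane and bilinearly sampling the encoder's feature map at those locations. This is a minimal PyTorch illustration under assumed conditions (known camera intrinsics `K`, vertices in camera coordinates, a single feature scale); the function name, tensor shapes, and projection model are our assumptions, not the authors' implementation.

```python
# Hedged sketch of pixel-aligned feature retrieval; all names and shapes
# are illustrative assumptions, not the paper's actual module.
import torch
import torch.nn.functional as F

def pixel_aligned_features(feat_map, verts_3d, K):
    """Sample per-vertex features at the 2D projections of 3D vertices.

    feat_map: (B, C, H, W) visual features from the encoder.
    verts_3d: (B, V, 3) coarse mesh vertices in camera coordinates (assumed).
    K:        (B, 3, 3) camera intrinsics (assumed known).
    Returns:  (B, V, C) pixel-aligned features for local refinement.
    """
    B, C, H, W = feat_map.shape
    # Perspective projection: multiply by K, then divide by depth.
    proj = torch.einsum('bij,bvj->bvi', K, verts_3d)         # (B, V, 3)
    uv = proj[..., :2] / proj[..., 2:3].clamp(min=1e-6)      # (B, V, 2) in pixels
    # Normalize pixel coordinates to [-1, 1] as required by grid_sample.
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1)
    # Bilinear sampling aligns each vertex with its local image evidence.
    sampled = F.grid_sample(feat_map, grid.unsqueeze(2),     # (B, C, V, 1)
                            mode='bilinear', align_corners=True)
    return sampled.squeeze(-1).permute(0, 2, 1)              # (B, V, C)
```

In a coarse-to-fine pipeline of the kind the abstract describes, such sampled features would be concatenated with the vertex features from the initial parametric estimate and fed to a refinement head; repeating the retrieval over multiple feature scales would correspond to the "consecutive" aspect mentioned above.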