Abstract
Recovering a 3D hand mesh from a monocular RGB image has a wide range of application scenarios such as VR/AR. A parametric hand model provides a strong geometric prior on hand shape and is commonly used to recover the 3D hand mesh. However, the rotation parameters of the hand model are difficult to learn directly, which limits the accuracy of model-based methods. To address this problem, we exploit the inverse kinematic chains of the hand to derive an analytical method that converts hand joint locations into rotation parameters. By integrating this analytical method into a neural network, we propose an end-to-end learnable model, named IKHand, to recover the 3D hand mesh. IKHand comprises a detection module and a mesh generation module: the detection module predicts 3D hand keypoints, and the mesh generation module converts these keypoints into the 3D hand mesh. Experimental results show that the proposed method generates accurate and robust 3D hand meshes under several challenging conditions, and achieves competitive results on the FreiHAND and RHD datasets.
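The core idea, an analytical inverse-kinematics step that turns predicted joint locations into per-joint rotations, can be illustrated with a minimal sketch. The snippet below is not the paper's exact formulation: it assumes a single kinematic chain with one child per joint (as in a finger), equal bone lengths between the rest pose and the prediction, and it recovers only the swing component of each rotation (the twist about the bone axis is not observable from joint positions alone). The function names `rotation_between` and `joints_to_rotations` are illustrative, not from the paper.

```python
import numpy as np

def rotation_between(u, v):
    """Rotation matrix taking direction u onto direction v (Rodrigues' formula)."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    axis = np.cross(u, v)
    s = np.linalg.norm(axis)   # sin(theta)
    c = np.dot(u, v)           # cos(theta)
    if s < 1e-8:
        if c > 0:              # already aligned
            return np.eye(3)
        # antiparallel: rotate by pi about any axis orthogonal to u
        a = np.array([0.0, 1.0, 0.0]) if abs(u[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
        a = a - u * np.dot(a, u)
        a /= np.linalg.norm(a)
        return 2.0 * np.outer(a, a) - np.eye(3)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]]) / s
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

def joints_to_rotations(rest_joints, pred_joints, parents):
    """Walk the chain root-to-tip; at each step solve the local rotation that
    aligns the rest-pose bone with the predicted bone, expressed in the
    accumulated parent frame. Assumes parents[j] < j (topological order)."""
    n = len(parents)
    R_global = [np.eye(3) for _ in range(n)]  # root orientation left as identity
    R_local = [np.eye(3) for _ in range(n)]
    for j in range(1, n):
        p = parents[j]
        rest_bone = rest_joints[j] - rest_joints[p]
        pred_bone = pred_joints[j] - pred_joints[p]
        # express the predicted bone in the parent's current local frame
        local_target = R_global[p].T @ pred_bone
        R_local[p] = rotation_between(rest_bone, local_target)
        R_global[j] = R_global[p] @ R_local[p]
    return np.stack(R_local), np.stack(R_global)
```

Because every step is differentiable (Rodrigues' formula away from the degenerate antiparallel case), such a module can sit between a keypoint-detection network and a parametric mesh layer and be trained end to end, which is the role the analytical conversion plays in IKHand.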