Abstract

Recovering a 3D hand mesh from a monocular RGB image has a wide range of applications such as VR/AR. Parametric hand models provide a good geometric prior on hand shape and are commonly used to recover the 3D hand mesh. However, the rotation parameters of the hand model are difficult to learn, which limits the accuracy of model-based methods. To address this problem, we exploit the inverse kinematic chains of the hand to derive an analytical method that converts hand joint locations into rotation parameters. By integrating this analytical method into a neural network, we propose an end-to-end learnable model, named IKHand, to recover the 3D hand mesh. IKHand comprises a detection module and a mesh generation module: the detection module predicts the 3D hand keypoints, and the mesh generation module takes these keypoints to generate the 3D hand mesh. Experimental results show that our method generates accurate and robust 3D hand meshes under several challenging conditions, and achieves competitive results on the FreiHAND and RHD datasets.
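To make the analytical joints-to-rotations conversion concrete, below is a minimal Python/NumPy sketch. It assumes a hypothetical 21-joint skeleton with a wrist root, and the function names and parent table are our own illustrative choices, not the paper's. Each predicted bone direction is aligned with the corresponding rest-pose (template) bone direction to yield an axis-angle rotation; the paper's full method additionally resolves rotations along the kinematic chains in each parent's local frame and handles twist about the bone axis, which this simplified version omits.

```python
import numpy as np

def bone_rotation(t_dir, p_dir, eps=1e-8):
    """Axis-angle rotation aligning a template bone direction t_dir
    with a predicted bone direction p_dir (both 3-vectors).

    Simplified: twist about the bone axis is unobservable from the
    two directions alone and is left out of this sketch."""
    t = t_dir / (np.linalg.norm(t_dir) + eps)
    p = p_dir / (np.linalg.norm(p_dir) + eps)
    axis = np.cross(t, p)
    sin_a = np.linalg.norm(axis)
    cos_a = np.clip(np.dot(t, p), -1.0, 1.0)
    if sin_a < eps:
        # Degenerate case (parallel or antiparallel bones) left
        # unhandled here; return the identity rotation.
        return np.zeros(3)
    return axis / sin_a * np.arctan2(sin_a, cos_a)

# Hypothetical 21-joint layout: joint 0 is the wrist; parents[j]
# gives the parent of joint j along its kinematic chain.
parents = [0, 0, 1, 2, 3, 0, 5, 6, 7, 0, 9, 10,
           11, 0, 13, 14, 15, 0, 17, 18, 19]

def joints_to_axis_angle(pred_joints, template_joints):
    """Convert predicted 3D joint locations (21, 3) into per-bone
    axis-angle parameters relative to a rest-pose template."""
    rots = np.zeros((len(parents), 3))
    for j, par in enumerate(parents):
        if j == 0:
            continue  # global root rotation is omitted in this sketch
        t_dir = template_joints[j] - template_joints[par]
        p_dir = pred_joints[j] - pred_joints[par]
        rots[j] = bone_rotation(t_dir, p_dir)
    return rots
```

Because every step above is differentiable almost everywhere, a conversion of this kind can sit between a keypoint-detection module and a parametric mesh decoder and be trained end to end, which is the integration the abstract describes.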
