Abstract

In this paper, we investigate 3D hand pose estimation from single depth images. On the one hand, accurate hand localization is crucial for pose estimation; on the other hand, multi-task learning methods have achieved great success in visual recognition tasks. We therefore propose to simultaneously detect the hand location and estimate its 3D pose in a multi-task learning framework. We use 3D region proposals for pose estimation, searching possible hand locations in 3D space. In the experiments, the proposed method is evaluated on several benchmark datasets and shown to be comparable to most existing 3D hand pose estimation methods.
