Abstract
Real-time human pose estimation is a challenging problem in computer vision. In this paper, we present a novel approach to recover a 3D human pose in real time from a single depth human silhouette using Principal Direction Analysis (PDA) on each recognized body part. In our work, the human body parts are first recognized from a depth human body silhouette via trained Random Forests (RFs). On each recognized body part, represented as a 3D point cloud, PDA is applied to estimate the principal direction of the body part. Finally, a 3D human pose is recovered by mapping the principal direction vector of each body part onto a 3D human body model built from super-quadrics linked by kinematic chains. In our experiments, we performed quantitative and qualitative evaluations of the proposed 3D human pose reconstruction methodology. Our evaluation results show that the proposed approach performs reliably on a sequence of unconstrained poses and achieves an average reconstruction error of 7.46 degrees over a few key joint angles. Our 3D pose recovery methodology should be applicable to many areas such as human-computer interaction and human activity recognition.
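The abstract does not spell out how PDA is computed, but a natural reading is that the principal direction of a body part is the dominant eigenvector of its point cloud's covariance matrix (i.e., the first principal component). The following sketch illustrates that interpretation; the function name `principal_direction` and the synthetic limb data are illustrative assumptions, not part of the paper.

```python
import numpy as np

def principal_direction(points):
    """Estimate the principal direction of a body-part point cloud.

    `points` is an (N, 3) array of 3D points belonging to one recognized
    body part. Here the principal direction is taken as the eigenvector
    of the covariance matrix with the largest eigenvalue, which is one
    common way to realize a principal-direction analysis.
    """
    centered = points - points.mean(axis=0)        # remove the centroid
    cov = centered.T @ centered / len(points)      # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigh: symmetric matrix
    direction = eigvecs[:, np.argmax(eigvals)]     # dominant eigenvector
    return direction / np.linalg.norm(direction)   # return a unit vector

# Example: points scattered along a roughly vertical "limb"
rng = np.random.default_rng(0)
limb = rng.normal(scale=[0.02, 0.02, 0.3], size=(500, 3))
print(principal_direction(limb))   # approximately [0, 0, +/-1]
```

In a pipeline like the one described, the resulting unit vector for each body part would then be mapped onto the corresponding segment of the kinematic super-quadric model to recover the joint configuration.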