Abstract
We live in a 3D world where people interact with each other and with their environment. Learning to estimate 3D posed humans therefore requires perceiving and interpreting these interactions. This paper proposes LEAPSE, a novel method that learns salient instance affordances for estimating a posed body from a single RGB image in a non-parametric manner. Existing methods mostly ignore the environment and estimate the human body independently of its surroundings. We capture the influences of both non-contact and contact instances on a posed body as a representation of the "environment affordances". The proposed method learns the global relationships between 3D joints, body mesh vertices, and salient instances as environment affordances on the human body. LEAPSE achieves state-of-the-art results on the 3DPW dataset, which contains many affordance instances, and also performs strongly on the Human3.6M dataset. We further demonstrate the benefit of our method by showing that the performance of existing weaker models improves significantly when combined with our environment affordance module.
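The abstract does not detail the architecture, but the description of learning global relationships between 3D joints, mesh vertices, and salient instances suggests a token-fusion design. The sketch below is a minimal, hypothetical illustration of one way such an environment-affordance module could be realized, assuming transformer-style cross-attention in PyTorch; the class name, tensor shapes, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only (not the paper's implementation): body tokens
# (3D joints + mesh vertices) attend to salient-instance tokens so that
# contact and non-contact instances can influence the estimated body.
import torch
import torch.nn as nn


class EnvironmentAffordanceModule(nn.Module):
    """Hypothetical module fusing salient-instance features into body tokens."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Cross-attention: queries are body tokens, keys/values are instance tokens.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim)
        )

    def forward(self, body_tokens: torch.Tensor, instance_tokens: torch.Tensor) -> torch.Tensor:
        # body_tokens:     (B, J + V, dim)  -- joint and mesh-vertex tokens
        # instance_tokens: (B, N, dim)      -- features of N salient instances
        attended, _ = self.cross_attn(body_tokens, instance_tokens, instance_tokens)
        body_tokens = self.norm(body_tokens + attended)
        return body_tokens + self.mlp(body_tokens)


if __name__ == "__main__":
    module = EnvironmentAffordanceModule()
    body = torch.randn(2, 24 + 431, 256)   # e.g. 24 joints + coarse mesh vertices
    instances = torch.randn(2, 5, 256)     # e.g. 5 detected salient instances
    print(module(body, instances).shape)   # torch.Size([2, 455, 256])
```

Such a module could be appended to an existing pose-estimation backbone, which would be consistent with the abstract's claim that weaker models improve when combined with the environment affordance module.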