Abstract
We present a new solution to egocentric 3D body pose estimation from monocular images captured by a downward-looking fish-eye camera installed on the rim of a head-mounted virtual reality device. This unusual viewpoint leads to images with a unique visual appearance, characterized by severe self-occlusions and strong perspective distortions that result in a drastic difference in resolution between the lower and upper body. We propose a new encoder-decoder architecture with a novel multi-branch decoder designed specifically to account for the varying uncertainty in 2D joint locations. Our quantitative evaluation, on both synthetic and real-world datasets, shows that our strategy leads to substantial improvements in accuracy over state-of-the-art egocentric pose estimation approaches. To tackle the severe lack of labelled training data for egocentric 3D pose estimation, we also introduce xR-EgoPose, a large-scale photo-realistic synthetic dataset offering 383K frames of high-quality renderings of people with diverse skin tones, body shapes, and clothing, in a variety of backgrounds and lighting conditions, performing a range of actions. Our experiments show that the high variability of this new synthetic training corpus leads to good generalization to real-world footage and to state-of-the-art results on real-world datasets with ground truth. Moreover, an evaluation on the Human3.6M benchmark shows that our method performs on par with top approaches on the more classic problem of 3D human pose estimation from a third-person viewpoint.
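To make the idea of the multi-branch decoder concrete, the following is a minimal sketch (not the authors' released code) of a heatmap autoencoder whose shared latent code feeds two heads: one regresses 3D joint positions, the other reconstructs the input 2D heatmaps so that the embedding is forced to retain the per-joint uncertainty those heatmaps encode. Module names, joint count, heatmap resolution, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_JOINTS = 15        # assumed number of body joints
HEATMAP_RES = 47     # assumed heatmap resolution (H = W)
LATENT_DIM = 20      # assumed size of the shared embedding


class HeatmapEncoder(nn.Module):
    """Compress per-joint 2D heatmaps into a low-dimensional embedding."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(N_JOINTS, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, LATENT_DIM)

    def forward(self, heatmaps):                 # (B, J, H, W)
        return self.fc(self.conv(heatmaps).flatten(1))


class MultiBranchDecoder(nn.Module):
    """Two heads over the shared embedding: 3D pose and heatmap reconstruction."""
    def __init__(self):
        super().__init__()
        self.pose_head = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 3 * N_JOINTS),
        )
        self.heatmap_head = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, N_JOINTS * HEATMAP_RES * HEATMAP_RES),
        )

    def forward(self, z):
        pose3d = self.pose_head(z).view(-1, N_JOINTS, 3)
        rec = self.heatmap_head(z).view(-1, N_JOINTS, HEATMAP_RES, HEATMAP_RES)
        return pose3d, rec


if __name__ == "__main__":
    enc, dec = HeatmapEncoder(), MultiBranchDecoder()
    heatmaps = torch.rand(2, N_JOINTS, HEATMAP_RES, HEATMAP_RES)
    pose3d, rec = dec(enc(heatmaps))
    # Joint training objective: 3D pose error plus heatmap reconstruction error,
    # so the embedding cannot discard the uncertainty carried by the heatmaps.
    gt_pose = torch.zeros_like(pose3d)           # placeholder ground truth
    loss = (pose3d - gt_pose).pow(2).mean() + (rec - heatmaps).pow(2).mean()
    print(pose3d.shape, rec.shape, loss.item())
```

The reconstruction head acts as a regularizer: training both heads jointly keeps the 2D uncertainty information in the shared code, rather than letting the 3D branch collapse it.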
Highlights
The advent of xR technologies has led to a wide variety of applications in areas such as entertainment, communication, medicine, CAD design, art, and workspace productivity.
We describe related work on monocular marker-less 3D human pose estimation, focusing on two distinct capture setups: outside-in approaches, where an external camera captures one or more subjects from a distance (the most commonly used setup), and first-person or egocentric systems, where a head-mounted camera observes the user's own body.
The training set of our xR-EgoPose dataset has been used to retrain the model of Martinez et al., allowing us to directly compare the performance of the 2D-to-3D lifting modules (a minimal sketch of such a lifting network is given below).
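For reference, here is a minimal sketch of a Martinez-style 2D-to-3D lifting network: a residual multilayer perceptron that maps 2D joint coordinates to 3D joint positions. The joint count, layer width, and number of residual blocks are illustrative assumptions, not the exact settings used in the paper.

```python
import torch
import torch.nn as nn

N_JOINTS = 15  # assumed joint count


class ResidualBlock(nn.Module):
    """Two fully connected layers with batch norm and dropout, plus a skip connection."""
    def __init__(self, width=1024, p_drop=0.5):
        super().__init__()
        self.block = nn.Sequential(
            nn.Linear(width, width), nn.BatchNorm1d(width), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(width, width), nn.BatchNorm1d(width), nn.ReLU(), nn.Dropout(p_drop),
        )

    def forward(self, x):
        return x + self.block(x)


class Lifter2Dto3D(nn.Module):
    """Lift flattened 2D joint coordinates to 3D joint positions."""
    def __init__(self, width=1024, n_blocks=2):
        super().__init__()
        self.inp = nn.Linear(2 * N_JOINTS, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(n_blocks)])
        self.out = nn.Linear(width, 3 * N_JOINTS)

    def forward(self, joints2d):                 # (B, J, 2)
        x = self.inp(joints2d.flatten(1))
        return self.out(self.blocks(x)).view(-1, N_JOINTS, 3)


if __name__ == "__main__":
    model = Lifter2Dto3D()
    model.eval()                                 # eval mode: BatchNorm uses running stats
    pred = model(torch.rand(1, N_JOINTS, 2))
    print(pred.shape)                            # torch.Size([1, 15, 3])
```

Retraining such a lifting module on the xR-EgoPose training set isolates the effect of the 2D-to-3D stage, so the comparison is not confounded by differences in the 2D joint detector.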
Summary
The advent of xR technologies (such as AR, VR, and MR) has led to a wide variety of applications in areas such as entertainment, communication, medicine, CAD design, art, and workspace productivity. These technologies mainly focus on immersing the user in a virtual space using a head-mounted display (HMD), which renders the environment from the specific viewpoint of the user.