Abstract
Three-dimensional human pose estimation models conventionally operate on RGB images or assume that accurately estimated (near-ground-truth) 2D human pose landmarks are available. Naturally, such data contains information about only two dimensions, whereas 3D poses require all three: height, width, and depth. In this paper, we propose a new 3D human pose estimation model that takes an estimated 2D pose and the corresponding depthmap as input to estimate the 3D human pose. In our system, the estimated 2D pose is obtained by processing an RGB image with a 2D landmark detection network that produces noisy heatmap data. We compare our results with a previously published Simple Linear Model (SLM) that takes accurately estimated 2D pose landmarks as input and has achieved state-of-the-art results for 3D human pose estimation on the Human3.6M dataset. Our results show that our model outperforms the SLM and aligns the 2D landmark data with the depthmap automatically. We have also tested our network using estimated 2D poses and depthmaps separately. All three conditions of our model (depthmap + 2D pose, depthmap only, and 2D pose only) are more accurate than the SLM, with, surprisingly, the depthmap-only condition being comparable in accuracy to the depthmap + 2D pose condition.

Keywords: 3D Pose Estimation, Convolutional Neural Network, Depthmap
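To make the two-branch idea described above concrete, the following is a minimal PyTorch sketch of one plausible fusion design: a small convolutional encoder for the depthmap, an MLP for the estimated 2D landmarks, and a shared head regressing 3D joint coordinates. The layer sizes, the 17-joint skeleton, and the 64x64 depthmap resolution are illustrative assumptions, not the architecture actually used in the paper.

```python
# Hedged sketch of a depthmap + 2D-pose fusion network (assumed design, not the authors' model).
import torch
import torch.nn as nn

NUM_JOINTS = 17  # Human3.6M-style skeleton (assumption)

class DepthPoseFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Depthmap branch: conv encoder over a 1x64x64 depthmap.
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),                      # 64 * 8 * 8 = 4096 features
        )
        # 2D pose branch: MLP over flattened (x, y) landmark coordinates.
        self.pose_encoder = nn.Sequential(
            nn.Linear(NUM_JOINTS * 2, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # Fusion head: concatenated features -> 3D joint coordinates.
        self.head = nn.Sequential(
            nn.Linear(4096 + 256, 512), nn.ReLU(),
            nn.Linear(512, NUM_JOINTS * 3),
        )

    def forward(self, depthmap, pose2d):
        d = self.depth_encoder(depthmap)           # (B, 4096)
        p = self.pose_encoder(pose2d.flatten(1))   # (B, 256)
        out = self.head(torch.cat([d, p], dim=1))  # (B, NUM_JOINTS * 3)
        return out.view(-1, NUM_JOINTS, 3)

# Example forward pass with dummy inputs.
model = DepthPoseFusionNet()
depth = torch.randn(2, 1, 64, 64)        # batch of depthmaps
pose = torch.randn(2, NUM_JOINTS, 2)     # batch of estimated 2D landmarks
print(model(depth, pose).shape)          # torch.Size([2, 17, 3])
```

Dropping either branch (zeroing its input or removing its encoder) gives the depthmap-only and 2D-pose-only conditions compared in the abstract.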