Abstract
Most existing deep learning-based methods for marker-less human pose estimation from a single depth map follow a common framework that takes a 2D depth map and directly regresses the 3D coordinates of human body joints via 2D convolutional neural networks (CNNs). However, a depth map is intrinsically 3D data: treating it as a 2D image distorts the shape of the actual object through the projection from 3D to 2D space and compels the network to perform perspective-distortion-invariant estimation. Moreover, directly regressing 3D coordinates from a 2D image is a highly nonlinear mapping, which makes the learning procedure difficult. To overcome these problems, a module called the Supervised Endecoder is proposed to process 3D convolutional data; it can also be stacked in series to adapt to datasets of different sizes. Based on this module, a network called the Supervised High Dimension Endecoder Network is designed to predict the key points of a marker-less human body in 3D space from a single depth map. Experiments show improved prediction accuracy compared to state-of-the-art approaches.
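The sketch below is not the authors' implementation; it only illustrates the general idea the abstract describes: converting a 2D depth map into a 3D voxel grid, processing it with 3D convolutions, and stacking encoder-decoder stages with intermediate supervision. The voxelization scheme, layer sizes, number of joints, and the use of per-joint 3D heatmaps are all assumptions made for illustration.

```python
# Minimal sketch (assumed design, not the paper's code): voxelized depth map +
# stacked 3D encoder-decoder stages, each with a supervised heatmap head.
import torch
import torch.nn as nn

def voxelize_depth(depth, depth_bins=32, d_min=0.5, d_max=3.0):
    """Convert a (B, H, W) depth map into a (B, 1, D, H, W) occupancy grid."""
    b, h, w = depth.shape
    idx = ((depth - d_min) / (d_max - d_min) * (depth_bins - 1)).long()
    idx = idx.clamp(0, depth_bins - 1)
    vox = torch.zeros(b, 1, depth_bins, h, w)
    vox.scatter_(2, idx.view(b, 1, 1, h, w), 1.0)  # mark the occupied depth bin
    return vox

class EnDecoder3D(nn.Module):
    """One 3D encoder-decoder stage with a supervision head that predicts
    per-joint 3D heatmaps (intermediate supervision is an assumption here)."""
    def __init__(self, channels=16, num_joints=14):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(channels, channels * 2, 3, stride=2, padding=1),
            nn.BatchNorm3d(channels * 2), nn.ReLU(inplace=True),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose3d(channels * 2, channels, 4, stride=2, padding=1),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv3d(channels, num_joints, 1)  # supervised output

    def forward(self, x):
        feat = self.decode(self.encode(x)) + x          # skip connection
        return feat, self.head(feat)                    # features + heatmaps

class StackedEnDecoderNet(nn.Module):
    """Stem + N stacked stages; a loss can be applied to every stage's output."""
    def __init__(self, stages=2, channels=16, num_joints=14):
        super().__init__()
        self.stem = nn.Conv3d(1, channels, 3, padding=1)
        self.stages = nn.ModuleList(
            EnDecoder3D(channels, num_joints) for _ in range(stages))

    def forward(self, vox):
        x = self.stem(vox)
        outputs = []
        for stage in self.stages:
            x, heatmaps = stage(x)
            outputs.append(heatmaps)
        return outputs

# Usage with a dummy depth map (values in metres).
depth = torch.rand(1, 32, 32) * 2.5 + 0.5
vox = voxelize_depth(depth)                 # (1, 1, 32, 32, 32)
heatmap_list = StackedEnDecoderNet()(vox)   # one heatmap set per stacked stage
print([h.shape for h in heatmap_list])
```

Stacking more stages (the `stages` argument) mirrors the abstract's claim that the module can be connected in series to match datasets of different sizes; how the actual network varies its depth is not specified in the abstract.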