Abstract
To reconstruct a 3D human surface from a single image, it is crucial to simultaneously consider human pose, shape, and clothing details. Recent approaches have combined parametric body models (such as SMPL), which capture body pose and shape priors, with neural implicit functions that flexibly learn clothing details. However, this combined representation introduces additional computation, e.g., signed-distance calculation during 3D body feature extraction, leading to redundancy in the implicit query-and-infer process and failing to preserve the underlying body shape prior. To address these issues, we propose a novel IUVD-Feedback representation, consisting of an IUVD occupancy function and a feedback query algorithm. This representation replaces the time-consuming signed-distance calculation with a simple linear transformation in the IUVD space, leveraging the SMPL UV maps. Additionally, it reduces redundant query points through a feedback mechanism, yielding more reasonable 3D body features and more effective query points, thereby preserving the parametric body prior. Moreover, the IUVD-Feedback representation can be embedded into any existing implicit human reconstruction pipeline without modifying the trained neural networks. Experiments on the THuman2.0 dataset demonstrate that the proposed IUVD-Feedback representation improves the robustness of the results and accelerates the query-and-infer process roughly threefold. Furthermore, this representation holds potential for generative applications by leveraging the semantic information inherent in the parametric body model.
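The abstract's core claim is that, given the SMPL UV maps, converting a query point into IUVD coordinates (part index, UV position, normal displacement) reduces to a linear transformation, avoiding a per-point signed-distance computation against the mesh. A minimal sketch of that idea, assuming a query expressed by barycentric weights over a body triangle plus an offset along the surface normal (the triangle's UV vertices, the part index, and the function name `to_iuvd` are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def to_iuvd(part_index, uv_tri, bary, d):
    """Linear map from (barycentric weights, normal offset) to (i, u, v, d).

    part_index: int, index of the SMPL body part (the "I" in IUVD).
    uv_tri:     (3, 2) array, UV coordinates of the triangle's vertices.
    bary:       (3,) array, barycentric weights (summing to 1).
    d:          scalar displacement along the body surface normal.
    """
    # A single matrix product replaces the signed-distance query:
    # the UV coordinate is just the barycentric combination of vertex UVs.
    uv = bary @ uv_tri
    return (part_index, float(uv[0]), float(uv[1]), d)

# Toy example with made-up triangle UVs and weights.
uv_tri = np.array([[0.2, 0.3],
                   [0.4, 0.3],
                   [0.3, 0.5]])
bary = np.array([0.5, 0.25, 0.25])
print(to_iuvd(3, uv_tri, bary, 0.01))  # (3, 0.275, 0.35, 0.01)
```

Because the map is linear, batches of query points can be transformed with one matrix multiplication, which is consistent with the abstract's reported speedup of the query-and-infer loop.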
IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)