Abstract

In this paper, we consider the novel problem of reconstructing a 3D clothed human avatar from multiple frames, independent of assumptions about camera calibration, capture space, and constrained actions. To help address this problem, we contribute a large-scale dataset, Multi-View and multi-Pose 3D Human (MVP-Human for short). The dataset contains 400 subjects, each with 15 scans in different poses and 8-view images for each pose, providing 6,000 3D scans and 48,000 images in total. In addition, we propose a baseline method that takes multiple images as input and generates a shape-with-skinning avatar in the canonical space in a single feed-forward pass. It first reconstructs the implicit skinning fields in a multi-level manner, and then aligns and integrates image features from the multiple views to estimate a pixel-aligned implicit function that represents the clothed shape. With the newly collected dataset, the baseline method shows promising performance on 3D clothed avatar reconstruction. We release the MVP-Human dataset and the baseline method, hoping to promote research and development in this field.
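The core idea of a pixel-aligned implicit function, as used in the baseline, can be illustrated with a minimal sketch: each queried 3D point is projected into every view, an image feature is sampled at the projected pixel, the per-view features are fused, and a small MLP maps the fused feature to an occupancy value. The sketch below is hypothetical and simplified (nearest-neighbour sampling, mean fusion, a two-layer MLP with random toy weights); the function names, shapes, and fusion rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def project(points, K):
    """Project 3D points (N, 3) to pixel coordinates (N, 2) with intrinsics K (3, 3)."""
    p = points @ K.T               # apply intrinsics
    return p[:, :2] / p[:, 2:3]    # perspective divide

def sample_features(feat_map, uv):
    """Nearest-neighbour sample of a feature map (H, W, C) at pixel coords (N, 2)."""
    h, w, _ = feat_map.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return feat_map[v, u]          # (N, C)

def implicit_fn(points, views, weights):
    """Fuse pixel-aligned features across views (mean here), then a tiny MLP -> occupancy in (0, 1)."""
    feats = [sample_features(f, project(points, K)) for f, K in views]
    x = np.mean(feats, axis=0)                        # multi-view feature fusion (toy choice)
    x = np.concatenate([x, points[:, 2:3]], axis=1)   # append depth of the query point
    h = np.maximum(0.0, x @ weights["w1"] + weights["b1"])              # ReLU layer
    return 1.0 / (1.0 + np.exp(-(h @ weights["w2"] + weights["b2"])))   # sigmoid occupancy

# Toy usage with random feature maps and weights.
rng = np.random.default_rng(0)
views = [(rng.normal(size=(64, 64, 8)),
          np.array([[50., 0., 32.], [0., 50., 32.], [0., 0., 1.]]))
         for _ in range(2)]
weights = {"w1": rng.normal(size=(9, 16)), "b1": np.zeros(16),
           "w2": rng.normal(size=(16, 1)), "b2": np.zeros(1)}
pts = rng.normal(size=(5, 3)) * 0.1 + np.array([0., 0., 2.0])  # points in front of the cameras
occ = implicit_fn(pts, views, weights)  # (5, 1) occupancy values in (0, 1)
```

A surface can then be extracted from this occupancy field (e.g., with marching cubes at the 0.5 level set), which is the standard final step for implicit clothed-shape representations.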
