Abstract

Most recent garment capture techniques rely on acquiring multiple views of clothing, which may not always be readily available, especially in the case of pre-existing photographs from the web. As an alternative, we propose a method that computes a 3D model of a human body and its outfit from a single photograph with little human interaction. Our algorithm not only captures the global shape and overall geometry of the clothing, but also extracts the physical properties (i.e., the material parameters needed for simulation) of the cloth. Unlike previous methods that use full 3D information (i.e., depth, multi-view images, or sampled 3D geometry), our approach achieves garment recovery from a single-view image by using physical, statistical, and geometric priors together with a combination of parameter estimation, semantic parsing, shape/pose recovery, and physics-based cloth simulation. We demonstrate the effectiveness of our algorithm by re-purposing the reconstructed garments for virtual try-on and garment transfer applications and for cloth animation on digital characters.
