Abstract

We propose a novel method for robust, wrinkle-realistic clothing reconstruction from a single RGB image. Our approach exploits the complementary advantages of implicit function methods and explicit clothing generative models: the former captures high-frequency wrinkle details and clothing appearance from a single image, while the latter provides a reasonable clothing shape prior and structured topology. To obtain implicit clothing with an implicit function method, we must automatically segment the clothing region from the reconstructed result and resolve its depth ambiguity. We therefore design a pixel-aligned segmentation method that achieves automatic and complete clothing segmentation, relying on a lightweight clothing mask network and a simple but effective segmentation strategy. To handle the depth ambiguity, we introduce a depth correction function that substantially removes uncertainty along the depth direction and recovers the correct clothing shape. We further leverage explicit clothing generative models to supply reasonable shapes and topologies: our method first infers an initial explicit clothing template from the generative models, then fits this explicit template to the implicit clothing to capture high-frequency deformation, yielding wrinkle-realistic and structured clothing results. Extensive experiments demonstrate the effectiveness of our approach, with both visual quality and geometric accuracy reaching the state-of-the-art level for clothing reconstruction from a single image.
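To make the pixel-aligned segmentation idea concrete, the sketch below shows one plausible way to sample image features at projected 3D query points, predict a per-point clothing mask with a lightweight head, and apply a simple depth correction. This is only a minimal illustration under assumed conventions: the names (`ClothingMaskHead`, `sample_pixel_aligned`, `correct_depth`, `project`), architecture, and blending-style correction are hypothetical and are not taken from the paper, whose exact formulations are not given in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClothingMaskHead(nn.Module):
    """Lightweight per-point clothing mask predictor (hypothetical architecture)."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, pixel_feats: torch.Tensor) -> torch.Tensor:
        # pixel_feats: (B, N, feat_dim) features sampled at projected query points.
        # Returns (B, N, 1) probabilities that each point belongs to clothing.
        return torch.sigmoid(self.mlp(pixel_feats))


def sample_pixel_aligned(feat_map, points_3d, project):
    """Project 3D query points into the image and bilinearly sample features.

    feat_map:  (B, C, H, W) image feature map from an encoder.
    points_3d: (B, N, 3) query points in camera space.
    project:   callable mapping 3D points to normalized image coords in [-1, 1].
    """
    uv = project(points_3d)                                     # (B, N, 2)
    grid = uv.unsqueeze(2)                                      # (B, N, 1, 2)
    feats = F.grid_sample(feat_map, grid, align_corners=True)   # (B, C, N, 1)
    return feats.squeeze(-1).permute(0, 2, 1)                   # (B, N, C)


def correct_depth(z_pred, z_prior, weight=0.5):
    """Stand-in depth correction: blend the ambiguous predicted depth toward
    a depth derived from the clothing shape prior. The paper's actual depth
    correction function is not specified in the abstract."""
    return (1.0 - weight) * z_pred + weight * z_prior


if __name__ == "__main__":
    # Toy usage with an orthographic projection onto the XY plane.
    B, C, H, W, N = 1, 256, 128, 128, 1024
    feat_map = torch.randn(B, C, H, W)
    points = torch.rand(B, N, 3) * 2.0 - 1.0
    pixel_feats = sample_pixel_aligned(feat_map, points, lambda p: p[..., :2])
    mask_prob = ClothingMaskHead(feat_dim=C)(pixel_feats)
    z_corrected = correct_depth(points[..., 2], torch.zeros(B, N))
    print(mask_prob.shape, z_corrected.shape)  # (1, 1024, 1) (1, 1024)
```

In this reading, the mask head only needs to classify clothing versus non-clothing at sampled points, which keeps it lightweight relative to a full occupancy network; the fitting of the explicit template to the implicit surface would happen downstream and is not sketched here.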
