Abstract

Image-based virtual try-on networks, which change the outfit of a person in an image to the desired clothes from another image, have attracted increasing research interest. Previous works extract a clothing-agnostic person representation from the original person image and then synthesize it with the given clothes image through a try-on network. However, their body-shape representation merely downsamples the clothed-body segmentation to a low resolution; this is too coarse, still carries noise from the original clothes, and can produce unrealistic artifacts. Accordingly, we propose SP-VITON (Shape-Preserving VIrtual Try-On Network), which preserves the user's original body shape while discarding the original clothes. First, we augment the shape variety of the dataset and estimate the person's 2D shape under the clothes using DensePose. A try-on network is then trained with the augmented dataset and the new shape representation. Experimental results show improvements over state-of-the-art image-based try-on methods across varied body shapes and clothing types in the input person image.
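
To make the shape-representation step concrete, below is a minimal sketch (not the authors' released code) of how a DensePose output could be turned into a clothing-agnostic body-shape input for a try-on network. It assumes IUV maps have been precomputed offline with the official DensePose tools as H x W x 3 arrays whose first channel holds the body-part index (0 = background, 1-24 = body parts); the function name and the 256x192 output resolution are illustrative assumptions, not details from the paper.

```python
import numpy as np
import cv2

def shape_from_iuv(iuv: np.ndarray, out_size=(256, 192)) -> np.ndarray:
    """Binary under-clothes body silhouette from a precomputed DensePose IUV map.

    iuv: H x W x 3 array; channel 0 is the per-pixel body-part index
         (0 = background, 1..24 = body parts), as produced by DensePose.
    """
    part_index = iuv[..., 0]
    # Any nonzero part index is body surface, so this silhouette follows the
    # estimated shape under the clothes rather than the clothed segmentation.
    body_mask = (part_index > 0).astype(np.float32)
    # Resize to the try-on network's input resolution (hypothetical 256x192);
    # nearest-neighbor keeps the mask binary.
    h, w = out_size
    return cv2.resize(body_mask, (w, h), interpolation=cv2.INTER_NEAREST)

# Usage (hypothetical file name):
# iuv = np.load("person_0001_iuv.npy")
# shape_map = shape_from_iuv(iuv)
```

Unlike a downsampled clothed-body segmentation, such a mask carries no trace of the original garment's outline, which is the property the abstract's shape representation is meant to provide.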
