Abstract
Image-based garment transfer systems aim to swap desired clothes from a model image onto arbitrary users. However, existing works do not let users choose which fashion articles to try on, i.e., users cannot decide which article (e.g., top, pants, or both) should be swapped. In this paper, we propose an Inpainting-based Virtual Try-On Network (I-VTON), which allows the user to try on arbitrary clothes from the model image in a selective manner. To realize this selectivity, we recast virtual try-on as an image inpainting task. First, textures are extracted from the garment and the user respectively to form a coarse result; in this phase, users decide which clothes to try on via an interactive texture control mechanism. Second, the missing regions in the coarse result are recovered by a Texture Inpainting Network (TIN). We introduce a triplet training strategy to ensure the naturalness of the final result. Qualitative and quantitative experiments demonstrate that I-VTON outperforms state-of-the-art methods in preserving both garment details and user identity, and confirm that our approach can transfer clothes flexibly and selectively.
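The two-phase pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the segmentation labels, the assumption of pre-aligned images, and the naive mean-fill inpainting stand-in (replacing the learned TIN) are all simplifications introduced here.

```python
import numpy as np

# Hypothetical garment-parsing labels; the paper's actual parsing classes
# are not specified in this summary.
BACKGROUND, TOP, PANTS = 0, 1, 2

def coarse_compose(user_img, model_img, user_seg, model_seg, selected):
    """Phase 1: build a coarse try-on result.

    Pixels of the selected garment(s) are copied from the model image,
    while regions the user keeps retain the user's own texture. Pixels
    covered by neither source are left as holes (zeros) and flagged in
    the returned mask for Phase 2. Assumes both images are pre-aligned.
    """
    h, w, _ = user_img.shape
    coarse = np.zeros_like(user_img)
    hole = np.ones((h, w), dtype=bool)

    # Keep the user's texture wherever the user is NOT wearing a selected garment.
    keep_user = ~np.isin(user_seg, selected)
    coarse[keep_user] = user_img[keep_user]
    hole[keep_user] = False

    # Paste the selected garment's texture from the model image.
    take_model = np.isin(model_seg, selected)
    coarse[take_model] = model_img[take_model]
    hole[take_model] = False
    return coarse, hole

def naive_inpaint(coarse, hole):
    """Phase 2 stand-in: fill holes with the mean of the known pixels.

    The actual method trains a Texture Inpainting Network (TIN) with a
    triplet strategy; a constant fill is used here only to complete the sketch.
    """
    out = coarse.copy()
    if hole.any() and (~hole).any():
        out[hole] = coarse[~hole].mean(axis=0)
    return out
```

Selecting only `TOP` in `selected` swaps the model's top onto the user while the user's pants and identity regions are carried over unchanged, matching the selective try-on behaviour the abstract describes.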
Highlights
Recent research on image synthesis [1]–[4] has enabled an interesting virtual try-on technology, namely image-based garment transfer [5], [6]
To solve the aforementioned challenges, we propose an end-to-end framework called Inpainting-based Virtual Try-On Network (I-VTON) to transfer arbitrary clothes from the model to the user in a selective manner
We propose a novel framework named Inpainting-based Virtual Try-On Network (I-VTON), which enables a user to try on different pieces of clothing in a selective manner
Summary
Recent research on image synthesis [1]–[4] has enabled an interesting virtual try-on technology, namely image-based garment transfer [5], [6]. Imagine a customer who wants to check how his or her own pants match a model's top: only the top needs to be swapped. Han et al. [9] and Wang et al. [10] both require in-shop clothing images, which is inconvenient for different types of clothes since prior knowledge about dressing is needed.