Abstract

Virtual try-on has gradually become a popular research topic in recent years. It aims to transfer images of in-shop clothes onto the image of a target person. Owing to the diversity of clothing attributes, building an image-based virtual try-on network is a complicated task that requires significant effort. Existing methods are unsatisfactory because they cannot faithfully preserve the characteristics of the clothes or the identity of the target person, which degrades the perceptual quality of the generated images; therefore, further research is required. To address this problem, we propose a novel try-on method that combines attribute transformation and local rendering. First, we employ pixel-level semantic segmentation to identify the try-on area and provide the implementation conditions for local rendering. Second, we construct a learnable attribute transformation module to complete the try-on task for different clothing attributes. Third, we use a learnable clothing warping module to fit the pose and figure of the target person, and we establish a novel loss function, called the modified style loss (M-SL), to handle clothes with rich details. Finally, we adopt a local rendering strategy that renders only the clothing area, ensuring that details in the non-target areas are not lost. Extensive experiments demonstrate that our method outperforms other state-of-the-art methods.
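For intuition, the sketch below shows the mask-based composition idea behind local rendering: only pixels inside the predicted clothing mask are replaced, while everything outside the mask is copied from the original person image. This is a minimal illustration assuming a pre-rendered clothing image and a binary mask; the function and variable names are illustrative and the learned rendering in the full method is more involved.

```python
import numpy as np

def composite_local_render(person_img, rendered_clothes, clothing_mask):
    """Blend the rendered clothing into the person image inside the mask only.

    person_img       : (H, W, 3) float array in [0, 1], original target person
    rendered_clothes : (H, W, 3) float array in [0, 1], warped/rendered clothes
    clothing_mask    : (H, W, 1) float array, 1 inside the try-on area, 0 outside

    Pixels outside the mask are copied unchanged from the person image,
    so non-target details (face, arms, background) are preserved.
    """
    return clothing_mask * rendered_clothes + (1.0 - clothing_mask) * person_img
```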
