Abstract

Current leading image-based virtual try-on algorithms mainly model the deformation of clothes as a whole. However, the deformations of different clothing parts can differ drastically, so existing algorithms fail to warp the clothes into the proper shape in challenging cases such as self-occlusion, complex poses, and sophisticated textures. Based on this observation, we propose a Landmark-Guided Virtual Try-On Network (LG-VTON), which explicitly divides the clothes into regions using estimated landmarks and applies a part-wise Thin Plate Spline (TPS) transformation to each region independently; each part-wise TPS transformation is computed from the estimated landmarks of its region. Finally, a virtual try-on sub-network estimates a composition mask that fuses the warped clothes with the person image to synthesize the try-on result. Extensive experiments on a virtual try-on dataset demonstrate that LG-VTON handles complicated clothes deformation and synthesizes satisfactory try-on images, achieving state-of-the-art performance both qualitatively and quantitatively.
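As a rough illustration of the pipeline the abstract describes (a landmark-driven TPS warp per clothing region, followed by mask-based composition), here is a minimal NumPy/SciPy sketch. The region layout, landmark format, and function names (`tps_warp_region`, `part_wise_warp`, `compose`) are illustrative assumptions, not the paper's actual implementation; SciPy's `RBFInterpolator` with a thin-plate-spline kernel stands in for the network's learned TPS module.

```python
# Hedged sketch of part-wise TPS warping + composition-mask fusion.
# All names and the landmark/region layout are assumptions for
# illustration; the paper's sub-networks are not reproduced here.
import numpy as np
from scipy.interpolate import RBFInterpolator  # thin-plate-spline kernel

def tps_warp_region(region_img, src_pts, dst_pts):
    """Warp one clothing region with a TPS fitted on its own landmarks.

    src_pts / dst_pts: (K, 2) arrays of (x, y) landmarks on the flat
    clothes image and on the target person, respectively (assumed layout).
    """
    h, w = region_img.shape[:2]
    # Backward mapping: fit TPS from target to source coordinates so
    # every output pixel can look up where it comes from.
    tps = RBFInterpolator(dst_pts, src_pts, kernel='thin_plate_spline')
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = tps(grid)  # (h*w, 2) source sample positions
    sx = np.clip(src[:, 0], 0, w - 1).round().astype(int)
    sy = np.clip(src[:, 1], 0, h - 1).round().astype(int)
    return region_img[sy, sx].reshape(h, w, -1)  # nearest-neighbor sample

def part_wise_warp(clothes, regions):
    """regions: list of (region_mask, src_landmarks, dst_landmarks),
    one entry per clothing part (e.g. torso, left/right sleeve)."""
    warped = np.zeros_like(clothes)
    for mask, src_pts, dst_pts in regions:
        piece = tps_warp_region(clothes * mask[..., None], src_pts, dst_pts)
        warped = np.where(piece > 0, piece, warped)  # naive overlap rule
    return warped

def compose(person, warped_clothes, comp_mask):
    """Fuse with the composition mask M: out = M*warped + (1-M)*person."""
    m = comp_mask[..., None]
    return m * warped_clothes + (1.0 - m) * person
```

Two caveats on the sketch: it uses backward mapping with nearest-neighbor sampling so every output pixel is defined, whereas a real system would interpolate; and `comp_mask` is passed in as an argument, whereas in the paper it is predicted by the virtual try-on sub-network.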
