Abstract

Purpose
The paper aims to transfer the item image of a given clothing product to the corresponding area of a user image. Existing classical methods suffer from unconstrained deformation of clothing and from occlusion caused by hair or poses, which leads to loss of detail in the try-on results. In this paper, the authors present a details-oriented virtual try-on network (DO-VTON), which synthesizes high-fidelity try-on images while preserving the characteristics of the target clothing.

Design/methodology/approach
The proposed try-on network consists of three modules. The fashion parsing module (FPM) generates the parsing map of a reference person image. The geometric matching module (GMM) warps the input clothing and matches it with the torso area of the reference person, guided by the parsing map. The try-on module (TOM) generates the final try-on image. In both FPM and TOM, an attention mechanism is introduced to obtain sufficient features, which enhances the preservation of clothing characteristics. In GMM, a two-stage coarse-to-fine training strategy with a grid regularization loss (GR loss) is employed to optimize the clothing warping.

Findings
The authors propose a three-stage image-based virtual try-on network, DO-VTON, that generates realistic try-on images with extensive characteristics preserved.

Research limitations/implications
The authors' proposed algorithm can provide a promising tool for image-based virtual try-on.

Practical implications
The proposed method provides a technology that helps consumers purchase favored clothes online and reduces the return rate in e-commerce.

Originality/value
The proposed algorithm provides a promising tool for image-based virtual try-on.
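The abstract does not give the exact form of the grid regularization (GR) loss used in GMM. As a rough illustration only, the sketch below shows one common way such a term is formulated in related try-on work: a second-order difference penalty on the warping grid that discourages abrupt, folding deformations. The function name, tensor layout and formulation are assumptions for illustration, not the paper's definition.

```python
import torch

def grid_regularization_loss(grid: torch.Tensor) -> torch.Tensor:
    """Hypothetical smoothness penalty on a clothing-warping sampling grid.

    grid: tensor of shape (N, H, W, 2) holding (x, y) sampling coordinates,
    e.g. as consumed by torch.nn.functional.grid_sample. The second-order
    differences between neighbouring grid points are penalized so the warp
    changes gradually across the torso region instead of folding or
    collapsing; the paper's actual GR loss may differ in detail.
    """
    # second-order differences along the width (x) direction
    dx = grid[:, :, 2:, :] - 2 * grid[:, :, 1:-1, :] + grid[:, :, :-2, :]
    # second-order differences along the height (y) direction
    dy = grid[:, 2:, :, :] - 2 * grid[:, 1:-1, :, :] + grid[:, :-2, :, :]
    return dx.abs().mean() + dy.abs().mean()
```

Such a term is typically added to the warping objective with a small weight so that it constrains the deformation without preventing the clothing from fitting the body shape.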
