Abstract

Traditional fashion design depends heavily on the talent and inspiration of designers. For inexperienced designers or ordinary consumers, however, it is difficult to complete the design of ready-to-wear clothes. In this paper, we introduce a novel fashion attributes disentanglement generative adversarial network (FadGAN) to help users accomplish the design process automatically. Unlike typical style transfer methods, which can only perform coarse-level style transfer, FadGAN learns an accurate mapping that preserves the identity of one fashion item while adopting the texture of another. Specifically, given two different fashion items, a fashion item disentanglement encoder first disentangles the features of the two inputs into structure codes and texture codes, respectively. A texture swap module then learns mixed texture codes to eliminate the feature discrepancies between the two fashion items. Finally, the mixed texture codes and the structure codes are fed into a generator to synthesize a new fashion item. Experimental results demonstrate that our model outperforms several state-of-the-art methods, with improvements of 0.006-0.446 on mask-LPIPS, 3.976-6.724 on CDH, 9.266-13.127 on CIEDE, and 0.145-0.325 on HISTO.
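To make the described pipeline concrete, the sketch below shows one plausible PyTorch realization of the flow in the abstract: an encoder that disentangles an image into a structure code and a texture code, a texture swap module that mixes the two texture codes, and a generator that decodes the structure code under the mixed texture. All module names, layer sizes, and the scale-and-shift modulation are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class DisentanglementEncoder(nn.Module):
    """Hypothetical encoder: splits a fashion-item image into a spatial
    structure code and a global texture code (names/sizes assumed)."""
    def __init__(self, in_ch=3, struct_ch=64, tex_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.to_structure = nn.Conv2d(64, struct_ch, 3, padding=1)
        self.to_texture = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, tex_dim)
        )

    def forward(self, x):
        h = self.backbone(x)
        return self.to_structure(h), self.to_texture(h)

class TextureSwap(nn.Module):
    """Illustrative stand-in for the texture swap module: fuses the two
    texture codes into a single mixed code for the generator."""
    def __init__(self, tex_dim=256):
        super().__init__()
        self.mix = nn.Linear(2 * tex_dim, tex_dim)

    def forward(self, tex_a, tex_b):
        return self.mix(torch.cat([tex_a, tex_b], dim=1))

class Generator(nn.Module):
    """Decodes the structure code, modulated by the mixed texture code
    via a simple per-channel scale and shift (an assumed mechanism)."""
    def __init__(self, struct_ch=64, tex_dim=256):
        super().__init__()
        self.film = nn.Linear(tex_dim, 2 * struct_ch)
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(struct_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, structure, texture):
        scale, shift = self.film(texture).chunk(2, dim=1)
        h = structure * scale[..., None, None] + shift[..., None, None]
        return self.decode(h)

# Pipeline: keep item A's structure, inject a texture mixed from A and B.
enc, swap, gen = DisentanglementEncoder(), TextureSwap(), Generator()
item_a = torch.randn(1, 3, 128, 128)
item_b = torch.randn(1, 3, 128, 128)
struct_a, tex_a = enc(item_a)
_, tex_b = enc(item_b)
new_item = gen(struct_a, swap(tex_a, tex_b))
print(new_item.shape)  # torch.Size([1, 3, 128, 128])
```

Using a spatial map for structure and a pooled vector for texture is one common way to separate layout from appearance; whether FadGAN adopts this exact split, or a different modulation scheme, is not specified in the abstract.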
