Abstract

The traditional fashion industry depends heavily on designers, whose talent and vision largely determine the quality of innovative designs. By taking advantage of recent advances in image-to-image translation with generative adversarial networks (GANs), marked improvements in designers' efficiency are now possible. Considering both randomness and controllability in the design process, this article presents a novel artificial intelligence (AI)-based framework for fashion design. Under this framework, a latent-space-based sketch-generation module is first introduced to produce diverse sketches. Second, a rendering-generation module is proposed to learn the mapping between textures and sketches, completing the fashion-design task. To synthesize semantic-aware textures on sketches effectively, a multi-conditional feature interaction module is developed within the rendering-generation model. Moreover, two training schemes are introduced to optimize the sketch-generation and rendering-generation modules. To evaluate the proposed models, we built a large-scale dataset consisting of 115,584 pairs of fashion item images. Experimental results demonstrate the effectiveness of the proposed method and indicate that our model can facilitate designers' workflows by taking full advantage of the controllability of different conditions (e.g., sketch and texture) and the randomness of the latent space.
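The abstract does not specify the exact architectures, so the following is only a minimal PyTorch sketch of the two-stage pipeline it describes, under stated assumptions: `SketchGenerator`, `RenderingGenerator`, and `FeatureInteraction` are hypothetical names, the layer sizes are illustrative, and the SPADE/AdaIN-style affine modulation merely stands in for the paper's multi-conditional feature interaction module.

```python
import torch
import torch.nn as nn

class SketchGenerator(nn.Module):
    """Maps a random latent vector z to a grayscale sketch (DCGAN-style; assumed)."""
    def __init__(self, z_dim=128, img_channels=1, base=64):
        super().__init__()
        self.net = nn.Sequential(
            # (N, z_dim, 1, 1) -> (N, base*4, 8, 8)
            nn.ConvTranspose2d(z_dim, base * 4, 8, 1, 0), nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, img_channels, 4, 2, 1), nn.Tanh(),  # 64x64 sketch
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class FeatureInteraction(nn.Module):
    """Toy multi-conditional fusion: modulates sketch features with texture features
    via a learned affine transform (a stand-in, not the paper's exact module)."""
    def __init__(self, channels):
        super().__init__()
        self.gamma = nn.Conv2d(channels, channels, 3, padding=1)
        self.beta = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, sketch_feat, texture_feat):
        return sketch_feat * (1 + self.gamma(texture_feat)) + self.beta(texture_feat)

class RenderingGenerator(nn.Module):
    """Renders an RGB fashion item from a sketch condition and a texture condition."""
    def __init__(self, base=64):
        super().__init__()
        self.enc_sketch = nn.Sequential(nn.Conv2d(1, base, 4, 2, 1), nn.ReLU(True))
        self.enc_texture = nn.Sequential(nn.Conv2d(3, base, 4, 2, 1), nn.ReLU(True))
        self.fuse = FeatureInteraction(base)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(base, base, 4, 2, 1), nn.ReLU(True),
            nn.Conv2d(base, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, sketch, texture):
        return self.dec(self.fuse(self.enc_sketch(sketch), self.enc_texture(texture)))

# Usage: randomness enters through z (diverse sketches); controllability enters
# through the sketch and texture conditions fed to the rendering stage.
z = torch.randn(4, 128)
sketch = SketchGenerator()(z)                     # (4, 1, 64, 64)
texture = torch.randn(4, 3, 64, 64)               # stand-in for a real texture patch
rendered = RenderingGenerator()(sketch, texture)  # (4, 3, 64, 64)
```

Separating the two stages mirrors the framework's split between randomness (latent sampling) and controllability (conditioning on a chosen sketch and texture); adversarial losses and the paper's two training schemes are omitted here.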