Abstract

In this article, we address fashion style image generation with deep neural networks. Given a garment image and one or more style images (e.g., flower, blue and white porcelain), generating a synthesized clothing image with a single style or mix-and-match styles is challenging: the global clothing content must be preserved while the styles cover the garment, local details must be rendered with high fidelity, and different styles must be confined to specific areas. To address this challenge, we propose a fashion style generator (FashionG) framework for single-style generation and a spatially constrained FashionG (SC-FashionG) framework for mix-and-match style generation. Both FashionG and SC-FashionG are end-to-end feedforward neural networks consisting of a generator for image transformation and a discriminator for preserving content and style globally and locally. Specifically, a global-based loss computed on full images preserves the overall clothing form and design, while a patch-based loss computed on image patches preserves detailed local style patterns. We develop an alternating patch-global optimization strategy to minimize these losses. Compared with FashionG, SC-FashionG employs an additional spatial constraint to ensure that each style is blended only onto its designated area of the clothing image. Extensive experiments demonstrate the effectiveness of both single-style and mix-and-match style generation.
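As a rough illustration of the alternating patch-global optimization idea described above, the following PyTorch sketch alternates between a loss computed on full images and a loss computed on randomly sampled patches. It is a minimal sketch under stated assumptions, not the authors' released implementation: the toy Generator architecture, the simple L1 surrogates standing in for the adversarial and style losses, the patch size, and the loss weight are all illustrative choices.

```python
# Hypothetical sketch of alternating patch-global optimization.
# L1 losses stand in for the paper's global/patch discriminator losses.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Toy feedforward image-transformation network (placeholder)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


def sample_patches(img, patch_size=32, n_patches=4):
    """Crop random patches; the patch-based loss is computed on these."""
    _, _, h, w = img.shape
    patches = []
    for _ in range(n_patches):
        top = torch.randint(0, h - patch_size + 1, (1,)).item()
        left = torch.randint(0, w - patch_size + 1, (1,)).item()
        patches.append(img[:, :, top:top + patch_size, left:left + patch_size])
    return torch.cat(patches, dim=0)


def train_step(gen, opt, clothing, style, step, lam=1.0):
    """One alternating step: even steps apply the global loss on full
    images, odd steps apply the patch loss on sampled patches."""
    opt.zero_grad()
    out = gen(clothing)
    if step % 2 == 0:
        # Global-based term: an L1 surrogate that keeps the overall
        # clothing form and design close to the input garment.
        loss = F.l1_loss(out, clothing)
    else:
        # Patch-based term: an L1 surrogate that pushes local patches of
        # the output toward patches of the style image.
        loss = lam * F.l1_loss(sample_patches(out), sample_patches(style))
    loss.backward()
    opt.step()
    return loss.item()


if __name__ == "__main__":
    gen = Generator()
    opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
    clothing = torch.rand(1, 3, 128, 128)  # placeholder garment image
    style = torch.rand(1, 3, 128, 128)     # placeholder style image
    for step in range(4):
        print(f"step {step}: loss = {train_step(gen, opt, clothing, style, step):.4f}")
```

In the same spirit, the spatial constraint of SC-FashionG could be sketched by masking the output so that each style's patch loss is only evaluated inside its designated region; that masking is likewise an assumption about one plausible realization, not the paper's exact formulation.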
