Abstract

This paper proposes Attribute-Decomposed GAN (ADGAN) and its enhanced version (ADGAN++) for controllable image synthesis, producing realistic images whose component attributes are specified by various source inputs. The core idea of both ADGAN and ADGAN++ is to embed component attributes into the latent space as independent codes, enabling flexible and continuous control of attributes through mixing and interpolation operations on explicit style representations. The major difference between them is that ADGAN processes all component attributes simultaneously, whereas ADGAN++ adopts a serial encoding strategy. More specifically, ADGAN consists of two encoding pathways with style block connections and decomposes the original hard mapping into multiple more accessible subtasks. In the source pathway, component layouts are extracted via a semantic parser, and the segmented components are fed into a shared global texture encoder to obtain decomposed latent codes. This strategy yields more realistic output images and automatically separates un-annotated component attributes. Although the original ADGAN is elegant and efficient, it intrinsically fails to handle semantic image synthesis when the number of attribute categories is large. To address this problem, ADGAN++ encodes the different component attributes serially to synthesize each part of the target real-world image, and adopts several residual blocks with segmentation-guided instance normalization to assemble the synthesized component images and refine the initial synthesis result. This two-stage design alleviates the massive computational cost of synthesizing real-world images with numerous attributes while preserving the disentanglement of different attributes, allowing flexible control over arbitrary component attributes of the synthesized images. Experimental results demonstrate the proposed methods' superiority over the state of the art in pose transfer, face style transfer, and semantic image synthesis, as well as their effectiveness for component attribute transfer. Our code and data are publicly available at https://github.com/menyifang/ADGAN.

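The following is a minimal PyTorch sketch of the attribute decomposition and the mixing/interpolation operations described above. It is not the authors' released implementation: the encoder architecture, code dimension, and helper names (ComponentStyleEncoder, decompose, mix, interpolate) are illustrative assumptions, intended only to show how per-component latent codes enable attribute-level editing.

# A minimal sketch (assumed, not the ADGAN code) of decomposing an image into
# independent per-component style codes, then editing attributes by mixing
# codes between images or interpolating between two codes.
import torch
import torch.nn as nn

class ComponentStyleEncoder(nn.Module):
    """Shared texture encoder applied to each masked component
    (hypothetical architecture; channel sizes are placeholders)."""
    def __init__(self, in_ch=3, code_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, code_dim, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, component_img):
        # Returns one style code per image in the batch: (B, code_dim).
        return self.net(component_img).flatten(1)

def decompose(image, part_masks, encoder):
    """Encode each semantic part (image multiplied by its binary mask from a
    semantic parser) into an independent latent code."""
    return {name: encoder(image * mask) for name, mask in part_masks.items()}

def mix(codes_src, codes_ref, component):
    """Attribute transfer: replace one component's code with the reference's."""
    mixed = dict(codes_src)
    mixed[component] = codes_ref[component]
    return mixed

def interpolate(codes_a, codes_b, component, alpha):
    """Continuous control: linearly blend a single component's code."""
    out = dict(codes_a)
    out[component] = (1 - alpha) * codes_a[component] + alpha * codes_b[component]
    return out

Because every component owns its own code, editing one attribute (for example, swapping in a reference image's upper-clothes code via mix, or sliding alpha in interpolate) leaves the remaining codes untouched; this independence is what the paper refers to as flexible and continuous control over individual component attributes.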