Abstract

Traditional approaches to semantic image synthesis focus mainly on the text descriptions while ignoring the structures and attributes of the original images. As a result, critical information, e.g., the style, background, object shapes, and pose, is missing from the generated images. In this paper, we propose a novel framework called Conditional Cycle-Generative Adversarial Network (CCGAN) to address this issue. Our model generates photo-realistic images conditioned on the given text descriptions while maintaining the attributes of the original images. The framework consists mainly of two coupled conditional adversarial networks that learn an image mapping which preserves the structures and attributes of the input images. We introduce a conditional cycle consistency loss to prevent the two generators from contradicting each other. This loss encourages the generated images to retain most of the features of the original image and improves the stability of network training. Moreover, benefiting from the cyclic training mechanism, the proposed networks learn the semantic information of the text more accurately. Experiments on the Caltech-UCSD Birds dataset and the Oxford-102 flower dataset demonstrate that the proposed method significantly outperforms existing methods in terms of image detail reconstruction and semantic expression.
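The abstract does not give the conditional cycle consistency loss explicitly; the sketch below is a minimal, hypothetical PyTorch formulation under the assumption that the two coupled generators (named `G_fwd` and `G_bwd` here) each take an image and a text embedding, and that the reconstruction error is penalized with an L1 term, as is common in cycle-consistent GANs.

```python
import torch
import torch.nn.functional as F

def conditional_cycle_consistency_loss(G_fwd, G_bwd, x, t_src, t_tgt, lam=10.0):
    """Hypothetical sketch: edit image x toward the target text, map it back
    under the source text, and penalize the L1 distance to the original so
    that structures and attributes of x are preserved."""
    y_fake = G_fwd(x, t_tgt)        # image edited to match the target description
    x_rec = G_bwd(y_fake, t_src)    # reconstruction conditioned on the source description
    return lam * F.l1_loss(x_rec, x)
```

In practice this term would be added to the usual conditional adversarial losses of both generators; the weight `lam` and the generator interfaces are assumptions for illustration only.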
