Abstract

Synthesizing images with fine details from text descriptions is a challenging task. Existing single-stage generative adversarial networks (GANs) fuse sentence features into the image generation process through affine transformations, which alleviates the missing-detail and heavy-computation problems of stacked networks. However, these single-stage networks ignore the word-level features in the text description, so the generated images lack detail. To address this issue, we propose a text aggregation module (TAM) that fuses the sentence features and word features of a text through a simple spatial attention mechanism. We then build a text connection fusion (TCF) block, consisting mainly of a gated recurrent unit (GRU) and an up-sampling block, which connects the text features used across up-sampling blocks to improve text utilization. In addition, to further improve the semantic consistency between the text and the generated images, we introduce the deep attentional multimodal similarity model (DAMSM) loss, which measures the similarity between text and image and encourages semantic consistency. Experimental results show that our method outperforms state-of-the-art models on the CUB and COCO datasets in both image fidelity and semantic consistency with the text.
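The core idea of the TAM, fusing word features into an image feature map via spatial attention alongside a broadcast sentence vector, can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation: the function name, shapes, and additive fusion are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_aggregation(img_feat, word_feats, sent_feat):
    """Hypothetical sketch of a text aggregation step.

    img_feat:   (C, H, W) image feature map
    word_feats: (L, C)    L word embeddings (assumed projected to C dims)
    sent_feat:  (C,)      sentence embedding (assumed projected to C dims)
    """
    C, H, W = img_feat.shape
    flat = img_feat.reshape(C, H * W)             # (C, HW)
    scores = word_feats @ flat                    # (L, HW) word-location similarity
    attn = softmax(scores, axis=0)                # attention over words per location
    word_ctx = word_feats.T @ attn                # (C, HW) per-location word context
    # Fuse: image features + attended word context + broadcast sentence vector.
    fused = flat + word_ctx + sent_feat[:, None]
    return fused.reshape(C, H, W)
```

A usage example: with an 8-channel 4x4 feature map, 5 word embeddings, and one sentence embedding, the fused output keeps the spatial shape of the input map.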
