Abstract
In recent years, the Generative Adversarial Network (GAN) has quickly become the most popular deep generative model framework and one of the most active topics in deep learning research. Although GANs have achieved remarkable results in text-to-image generation, when generating a complex image containing multiple objects, the positions of the objects tend to blur and overlap, and the generated image suffers from blurred edges and unclear local textures. A given text description can usually produce a corresponding rough image, but problems remain in the image details. To address these problems, a scene-graph-based stacked generative adversarial network (Scene Graph Stacked GAN, SGS-GAN) is proposed on the basis of StackGAN: the text description is converted into a scene graph, the scene graph is used as the condition vector, and it is fed together with random noise into the generator to obtain the output image. The experimental results show that the Inception score of the SGS-GAN model on the Visual Genome and COCO datasets reached 6.64 and 6.52, respectively, improvements of 0.212 and 0.219 over Sg2Im. This indicates that, for the same amount of training, conditioning on the scene graph noticeably improves the diversity and vividness of the generated samples and the sharpness of the images.
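To make the conditioning scheme concrete, the following is a minimal sketch, not the authors' code, of the pipeline the abstract describes: a scene-graph embedding serves as the condition vector and is concatenated with random noise at the generator input. All class names, layer sizes, and the simple average-pooling graph encoder here are illustrative assumptions; a full implementation would stack multiple generator stages and run a graph convolution network over the scene graph's (subject, predicate, object) triples.

import torch
import torch.nn as nn

class SceneGraphEncoder(nn.Module):
    """Pools object and relationship embeddings into one condition vector.
    Hypothetical stand-in for a graph convolution over scene-graph triples."""
    def __init__(self, num_objects, num_relations, embed_dim=128, cond_dim=128):
        super().__init__()
        self.obj_embed = nn.Embedding(num_objects, embed_dim)
        self.rel_embed = nn.Embedding(num_relations, embed_dim)
        self.proj = nn.Linear(embed_dim, cond_dim)

    def forward(self, obj_ids, rel_ids):
        # Average-pool node and edge embeddings into a single graph vector.
        nodes = self.obj_embed(obj_ids).mean(dim=1)
        edges = self.rel_embed(rel_ids).mean(dim=1)
        return self.proj(nodes + edges)

class Generator(nn.Module):
    """Maps the concatenated [noise; condition] vector to a 64x64 RGB image,
    as a single first-stage generator of a stacked GAN would."""
    def __init__(self, noise_dim=100, cond_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 4 * 4 * 256),
            nn.Unflatten(1, (256, 4, 4)),
            # Four stride-2 upsampling steps: 4 -> 8 -> 16 -> 32 -> 64.
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z, cond):
        # Scene-graph condition vector is concatenated with random noise.
        return self.net(torch.cat([z, cond], dim=1))

# Usage: encode a toy scene graph (3 objects, 2 relations), then sample an image.
enc, gen = SceneGraphEncoder(num_objects=100, num_relations=50), Generator()
cond = enc(torch.randint(0, 100, (1, 3)), torch.randint(0, 50, (1, 2)))
img = gen(torch.randn(1, 100), cond)   # -> tensor of shape (1, 3, 64, 64)

The design point the abstract emphasizes is that the condition vector comes from a structured scene graph rather than a flat sentence embedding, which gives the generator explicit object and relationship information when laying out multiple objects.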