Abstract
Generative Adversarial Networks (GANs) have become one of the most interesting ideas in Machine Learning in recent years. Generative modelling is an exciting area of research, and researchers are drawn to building generative models because such models promise to change what machines can do for humans. This paper proposes the generation of realistic images that match the semantics of a text description using a Knowledge Graph together with a Knowledge-Guided Generative Adversarial Network (KG-GAN), which incorporates embeddings derived from the Knowledge Graph (KG) into the GAN. The Knowledge Graph is constructed from the text description using Natural Language Processing (NLP) techniques. The resulting Knowledge Graph is converted into embeddings by a Graph Convolutional Network (GCN), and these embeddings are fed into the GAN, whose generator and discriminator are trained to produce realistic images; the performance of the model is then evaluated. The experimental study is conducted on the Caltech-UCSD Birds 200-2011 (CUB-200-2011) dataset and shows that the knowledge-graph-based approach to image generation with a GAN performs well, achieving higher accuracy than established text-to-image GAN techniques proposed in previous years.
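The pipeline described above, in which a GCN turns knowledge-graph node features into embeddings that condition a GAN generator, can be sketched in code. The following is a minimal illustration only, not the authors' KG-GAN implementation: the layer sizes, mean-pooling of node embeddings, image resolution, and class names (`SimpleGCNLayer`, `ConditionalGenerator`) are all assumptions introduced for clarity.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_norm @ H @ W). Illustrative, not the paper's GCN."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, node_feats, adj_norm):
        # node_feats: (N, in_dim) node features; adj_norm: (N, N) normalized adjacency
        return torch.relu(adj_norm @ self.linear(node_feats))

class ConditionalGenerator(nn.Module):
    """Maps a noise vector concatenated with a pooled KG embedding to a 64x64 RGB image."""
    def __init__(self, noise_dim=100, kg_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + kg_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * 64 * 64),
            nn.Tanh(),
        )

    def forward(self, noise, kg_embedding):
        x = torch.cat([noise, kg_embedding], dim=1)
        return self.net(x).view(-1, 3, 64, 64)

# Toy usage: 5 KG nodes with 32-dim features and a placeholder adjacency matrix.
nodes = torch.randn(5, 32)
adj = torch.eye(5)
gcn = SimpleGCNLayer(32, 128)
kg_embedding = gcn(nodes, adj).mean(dim=0, keepdim=True)  # pool node embeddings -> (1, 128)

gen = ConditionalGenerator()
noise = torch.randn(1, 100)
fake_image = gen(noise, kg_embedding)  # (1, 3, 64, 64) conditioned fake image
```

In the full system, the discriminator would receive both real and generated images (optionally with the same KG embedding) and the two networks would be trained adversarially; the sketch only shows how graph-derived embeddings can condition the generator.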