Abstract

Synthesis of a realistic image from a matching visual description provided in textual form is a challenge that has attracted attention in the recent artificial intelligence research community. Text-to-image generation is the problem of producing, for a given text input, an image that matches the text description. A relatively new class of neural networks referred to as generative adversarial networks (GANs) has provided compelling results in understanding textual features and generating high-resolution images. In this work, the main aim is to generate an automobile image from a given text input using generative adversarial networks and to manipulate automobile colour using a text-adaptive discriminator. This work involves creating a detailed text description of each car image to train the GAN model to produce images.

Keywords: Generative adversarial networks, Generator, Discriminator, GANs, Text-to-image synthesis
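To make the generator/text-adaptive-discriminator pairing described above concrete, the sketch below shows a minimal text-conditional GAN: the generator concatenates a noise vector with a text embedding, and the discriminator scores an image jointly with that embedding so that mismatched image-description pairs can be penalised. This is an illustrative assumption, not the paper's actual architecture; the embedding size, layer widths, 64x64 resolution, and PyTorch implementation are all hypothetical choices.

```python
import torch
import torch.nn as nn

TEXT_DIM, NOISE_DIM, IMG_CH = 256, 100, 3  # assumed dimensions, not from the paper

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Upsample a (noise + text) vector to a 3 x 64 x 64 image.
        self.net = nn.Sequential(
            nn.ConvTranspose2d(NOISE_DIM + TEXT_DIM, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, IMG_CH, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, noise, text_emb):
        # Condition on the description by concatenating its embedding with the noise vector.
        z = torch.cat([noise, text_emb], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(IMG_CH, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 512, 4, 2, 1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, True),
        )
        # Fuse image features with the text embedding before the real/fake decision.
        self.joint = nn.Sequential(
            nn.Conv2d(512 + TEXT_DIM, 512, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(512, 1, 4), nn.Sigmoid(),
        )

    def forward(self, img, text_emb):
        feat = self.conv(img)                                   # B x 512 x 4 x 4
        text = text_emb[:, :, None, None].expand(-1, -1, 4, 4)  # tile text over the feature map
        return self.joint(torch.cat([feat, text], dim=1)).view(-1)

if __name__ == "__main__":
    # Example forward pass with random inputs standing in for an encoded car description.
    g, d = Generator(), Discriminator()
    noise = torch.randn(2, NOISE_DIM)
    text_emb = torch.randn(2, TEXT_DIM)
    fake = g(noise, text_emb)
    score = d(fake, text_emb)
    print(fake.shape, score.shape)  # torch.Size([2, 3, 64, 64]) torch.Size([2])
```

In this sketch, colour manipulation would amount to changing the colour words in the input description, re-encoding it into `text_emb`, and regenerating; the joint scoring in the discriminator is what pushes the generator to respect such changes.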
