Abstract

Conditional adversarial networks offer a general solution to unlabeled image-to-image translation problems, where objects in the image are not individually identified. These networks learn not only a mapping from input image to output image but also a loss function for training that mapping. This makes it possible to apply the same generic approach to problems that would traditionally require hand-crafted loss formulations and large image datasets. This paper demonstrates that conditional adversarial networks can effectively synthesize images from label maps and reconstruct objects from latent-space maps, and shows their applicability and ease of adoption, without parameter tuning, to video output produced frame by frame. It also shows that reasonable results can be achieved without manually engineering loss functions for the adversarial network, with low latency and high throughput.
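The learned-loss idea above is commonly instantiated as a pix2pix-style objective: the generator is trained against a discriminator's adversarial score plus a weighted L1 reconstruction term. The sketch below is illustrative only; the function names, the weight `lam`, and the toy inputs are assumptions, not details taken from this paper.

```python
import numpy as np

# Illustrative pix2pix-style generator objective: an adversarial term
# (how well G fools the discriminator D) plus a lambda-weighted L1 term
# (how close G's output is to the target image). Names are hypothetical.

def bce(pred, target):
    """Binary cross-entropy on discriminator scores in (0, 1)."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def generator_loss(d_scores_on_fake, fake_img, real_img, lam=100.0):
    """Adversarial loss (label fakes as real) + lambda * L1 reconstruction."""
    adv = bce(d_scores_on_fake, np.ones_like(d_scores_on_fake))
    l1 = float(np.mean(np.abs(fake_img - real_img)))
    return adv + lam * l1

# Toy check: identical images give zero L1, so only the adversarial term remains;
# a discriminator score of 0.5 yields -log(0.5) ~= 0.693.
scores = np.array([0.5, 0.5])
img = np.zeros((4, 4))
loss = generator_loss(scores, img, img)
```

Because the discriminator supplies the adversarial term automatically, the only hand-chosen component is the reconstruction weight, which is what lets the same objective transfer across translation tasks.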
