Abstract

Synthesizing realistic images remains a challenge in machine learning because images are complex and high-dimensional, which makes them hard to model well. Building on recent progress in both generative multi-adversarial networks (GMAN) and conditional generative adversarial networks (CGAN), this research introduces a new method to improve image synthesis in generative adversarial networks (GAN). It combines the strengths of both techniques into a model (Hybrid-GAN) that produces higher-quality images which are hard to distinguish from real ones. Furthermore, the model significantly improves the log-likelihood of test data under the conditional distributions. To validate the results, we conducted a detailed comparison between images generated by our new model, Hybrid-GAN, and those produced by standard GANs. We trained the new model on the MNIST dataset and report the results obtained on the generation task.
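To make the GMAN + CGAN combination concrete, the sketch below shows one plausible form of such a hybrid: a generator and several discriminators that are all conditioned on the class label, with the generator trained against aggregated feedback from the discriminator ensemble. This is a minimal illustration only; the framework (PyTorch), layer sizes, number of discriminators, and the mean-based aggregation are assumptions, not the authors' implementation.

```python
# Minimal sketch of a conditional GAN with multiple discriminators
# (CGAN-style conditioning + GMAN-style discriminator ensemble).
# All hyperparameters and architecture choices here are assumptions.

import torch
import torch.nn as nn

LATENT_DIM = 100        # size of the noise vector z (assumed)
NUM_CLASSES = 10        # MNIST digit labels used as the condition y
IMG_DIM = 28 * 28       # flattened MNIST image
NUM_DISCRIMINATORS = 3  # number of discriminators K (assumed value)


class ConditionalGenerator(nn.Module):
    """Maps noise z concatenated with a one-hot label y to an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256),
            nn.ReLU(),
            nn.Linear(256, IMG_DIM),
            nn.Tanh(),  # outputs in [-1, 1]; real images should be scaled the same way
        )

    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))


class ConditionalDiscriminator(nn.Module):
    """Scores an image/label pair as real or fake (returns a logit)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + NUM_CLASSES, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))


def train_step(G, discriminators, real_x, real_y, opt_g, opt_ds):
    """One adversarial update for all discriminators and the generator."""
    bce = nn.BCEWithLogitsLoss()
    batch = real_x.size(0)
    y_onehot = nn.functional.one_hot(real_y, NUM_CLASSES).float()
    z = torch.randn(batch, LATENT_DIM)
    fake_x = G(z, y_onehot)

    # Update each discriminator independently, as in GMAN.
    for D, opt_d in zip(discriminators, opt_ds):
        opt_d.zero_grad()
        loss_real = bce(D(real_x, y_onehot), torch.ones(batch, 1))
        loss_fake = bce(D(fake_x.detach(), y_onehot), torch.zeros(batch, 1))
        (loss_real + loss_fake).backward()
        opt_d.step()

    # Train the generator against aggregated discriminator feedback.
    # GMAN also proposes softmax-weighted aggregation; the mean is the simplest choice.
    opt_g.zero_grad()
    g_loss = torch.stack(
        [bce(D(fake_x, y_onehot), torch.ones(batch, 1)) for D in discriminators]
    ).mean()
    g_loss.backward()
    opt_g.step()
    return g_loss.item()
```

In use, one would instantiate a single `ConditionalGenerator`, `NUM_DISCRIMINATORS` copies of `ConditionalDiscriminator`, one optimizer per network (e.g. `torch.optim.Adam(..., lr=2e-4)`), and call `train_step` on each MNIST mini-batch with images flattened and scaled to [-1, 1].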
