Abstract

Generative Adversarial Networks (GANs) have become the dominant generative models in recent years. Although GANs are capable of generating sharp and realistic images, they face several problems such as training instability and mode collapse. To address these issues, in addition to the usual distribution matching performed by GAN's adversarial training in the high-dimensional data space, we propose to perform distribution matching within a low-dimensional latent representation space as well. Such a low-dimensional latent space is obtained by training an Autoencoder (AE), which not only captures salient features and modes of the data distribution but can also be regularized to learn a well-behaved latent manifold structure of the data. Based on this, we develop a novel hybrid generative model that combines AE and GAN, named Dual Distribution Matching GAN (DM2GAN), which performs distribution matching in the data and latent spaces simultaneously. We theoretically show that the optimum of the proposed distribution-matching constraint in the latent space is attained if and only if the generated and real data distributions match exactly. Empirical evaluations on 2D synthetic data, MNIST-1K, and several real-world datasets demonstrate the effectiveness of the proposed method in stabilizing training and increasing mode coverage for GANs.
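The dual distribution-matching idea can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual method: it uses the (squared) maximum mean discrepancy (MMD) as a simple non-adversarial stand-in for the adversarial matching objectives, and a fixed random linear map as a stand-in for a trained autoencoder's encoder. All names, dimensions, and the choice of MMD are assumptions for illustration only.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=3.0):
    """Squared maximum mean discrepancy with an RBF kernel: a simple
    two-sample statistic that is ~0 when the two batches come from
    the same distribution, and grows as the distributions diverge."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)

# Hypothetical fixed linear "encoder" standing in for the trained AE:
# it maps 8-d data points to 2-d latent codes.
W_enc = 0.1 * rng.normal(size=(8, 2))

def encode(x):
    return x @ W_enc

# A batch of real data and a deliberately mismatched "generated" batch.
x_real = rng.normal(size=(256, 8))
x_fake = rng.normal(loc=1.5, size=(256, 8))

# Distribution matching measured in BOTH spaces, as described above:
loss_data = rbf_mmd2(x_real, x_fake)                    # data space
loss_latent = rbf_mmd2(encode(x_real), encode(x_fake))  # latent space
total_loss = loss_data + loss_latent
```

A training loop would minimize such a combined objective with respect to the generator; here the point is only that the latent-space term gives a second, low-dimensional view of the same mismatch, which is the mechanism the abstract describes.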
