Abstract

Generative adversarial networks (GANs) have attracted intense interest in the field of generative models. This paper first analyzes GANs' approximation property theoretically. Analogous to the universal approximation property of fully connected neural networks with one hidden layer, we prove that the generator in GANs, fed with a latent input variable, can universally approximate the underlying data distribution as the number of hidden neurons increases. Furthermore, we propose an approach named stochastic data generation (SDG) to enhance GANs' approximation ability. The approach rests on a simple idea: impose randomness on the data-generation process in GANs by placing a prior distribution on the conditional probability between layers. Experimental results on a synthetic dataset verify the improved approximation ability obtained by SDG. On practical datasets, three GANs using SDG also outperform their traditional counterparts when the model architectures are smaller.
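The abstract does not specify the form of the prior on the inter-layer conditional. The following is a minimal sketch of how such a stochastic generator might look, assuming an isotropic Gaussian conditional centered on the deterministic activation; the class name `SDGGenerator`, the parameter `noise_std`, and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a generator with stochastic data generation (SDG).
# Assumption (not from the paper): the prior on the conditional distribution
# between layers is Gaussian, centered on the deterministic activation,
# with fixed standard deviation `noise_std`.
import torch
import torch.nn as nn

class SDGGenerator(nn.Module):
    def __init__(self, latent_dim=64, hidden_dim=128, out_dim=2, noise_std=0.1):
        super().__init__()
        self.fc1 = nn.Linear(latent_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, out_dim)
        self.noise_std = noise_std

    def forward(self, z):
        # Compute the deterministic activation, then inject Gaussian noise so
        # each hidden representation is a sample from the layer's conditional.
        h = torch.relu(self.fc1(z))
        h = h + self.noise_std * torch.randn_like(h)  # stochastic layer output
        h = torch.relu(self.fc2(h))
        h = h + self.noise_std * torch.randn_like(h)
        return self.out(h)

# Usage: sample a latent variable and generate data as in a standard GAN.
g = SDGGenerator()
z = torch.randn(16, 64)
x_fake = g(z)  # shape: (16, 2)
```

Training would proceed as for an ordinary GAN; only the generator's forward pass changes, with the injected noise giving the generator extra stochastic capacity between layers.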
