Abstract

The generative adversarial network (GAN) is one of the most important natural image synthesis methods, and one of its major challenges is the stability of adversarial learning. Usually, a GAN directly optimizes a single divergence via a two-player zero-sum game between the generator and the discriminator. Because the parameter updates of the generator and discriminator are complex and changeable, directly optimizing a single divergence easily falls into a local optimum, which harms the stability of adversarial learning. To improve adversarial learning stability and convergence accuracy, the same-solution-constraints GAN (SSC-GAN) is proposed. In this novel GAN, a first-order ordinary differential equation (ODE) is constructed from the probability density function of the training set; this ODE is a same-solution problem of the GAN optimization problem, and its constraint conditions are given to ensure the existence and uniqueness of the ODE solution. These constraints are used to guide the GAN parameter updates to improve adversarial learning stability and thereby obtain better training results. On CELEBA and CIFAR10, experimental results show that the novel GAN outperforms the classic GAN models. On CELEBA, CIFAR10, and LSUN-BEDROOM, the proposed SSC-GAN also outperforms two state-of-the-art GAN models, SAGAN and SNGAN.
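The instability the abstract refers to can be illustrated on a toy problem. The following sketch (an illustrative assumption, not the paper's SSC-GAN method) shows why naively optimizing a single objective in a two-player zero-sum game is unstable: for the bilinear game min_x max_y f(x, y) = x·y, simultaneous gradient descent-ascent spirals away from the equilibrium at (0, 0) instead of converging.

```python
# Toy illustration of adversarial-training instability.
# The bilinear zero-sum game f(x, y) = x * y has its equilibrium at (0, 0),
# but simultaneous gradient descent (on x) / ascent (on y) orbits outward,
# moving farther from the equilibrium at every step.

def simultaneous_gda(x, y, lr, steps):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y."""
    for _ in range(steps):
        gx, gy = y, x                      # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy    # x minimizes, y maximizes
    return x, y

x0, y0 = 1.0, 1.0
x, y = simultaneous_gda(x0, y0, lr=0.1, steps=200)
r0 = (x0 ** 2 + y0 ** 2) ** 0.5
r = (x ** 2 + y ** 2) ** 0.5
print(f"distance from equilibrium grew: {r0:.3f} -> {r:.3f}")
```

Each step multiplies the squared distance from the equilibrium by (1 + lr²), so the iterates diverge at a geometric rate. Auxiliary constraints on the parameter updates, such as those SSC-GAN derives from its same-solution ODE, are one way to counteract this kind of divergence.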
