Abstract
Training instability in generative adversarial networks (GANs) remains one of the most challenging open problems, for which both a theoretical explanation of the root cause and an effective remedy are needed. In this study, we show theoretically that the contradiction between training the discriminator to optimality and minimizing the generator objective leads to training instability in GANs. To address this problem, we propose a targeted gradient penalty technique. Unlike other penalty techniques, it penalizes the Lipschitz constant of the discriminator, which we identify as the key quantity governing the instability. We performed a series of experimental comparisons from three perspectives: the oscillation amplitude of the loss function (convergence), the general variation trend of the gradient, and the holistic performance of the network. The results demonstrate that the proposed technique significantly alleviates training instability in GANs.
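For orientation, the sketch below shows a standard gradient-penalty term in the WGAN-GP style, which bounds the discriminator's Lipschitz constant by pushing its gradient norm toward 1 on points interpolated between real and generated samples. It is an illustrative stand-in only: the paper's targeted penalty may differ in form, and the names `gradient_penalty`, `lambda_gp`, and the unit target for the gradient norm are assumptions, not taken from the paper.

```python
import torch

def gradient_penalty(discriminator, real, fake, device="cpu"):
    """Generic WGAN-GP-style penalty (illustrative, not the paper's exact method):
    penalizes deviation of the discriminator's gradient norm from 1 on samples
    interpolated between real and fake data, which constrains its Lipschitz constant."""
    batch_size = real.size(0)
    # One random interpolation coefficient per sample, broadcast over remaining dims
    eps = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)

    scores = discriminator(interp)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty itself is differentiable
    )[0]
    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()

# Typical use inside the discriminator update (lambda_gp is a hypothetical weight):
# d_loss = d_loss_adv + lambda_gp * gradient_penalty(D, real_batch, fake_batch)
```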