Abstract

Generative adversarial networks (GANs) are generative models based on game theory. Because the balance between the generator and the discriminator must be carefully maintained during training, stable training is difficult to achieve. Although several solutions have been proposed to alleviate this issue, how to improve the stability of GANs still merits discussion. We propose a GAN that we call the discarding fake samples GAN (DFS-GAN). During training, some generated samples fail to fool the discriminator and provide relatively uninformative gradients to it. In the stabilized discriminator module (SDM), we therefore discard these fake but easily discriminated samples. We also propose a new loss function, SGAN-gradient penalty 1, and explain the rationale of the SDM and of our loss function from a Bayesian decision perspective. We derive the best number of fake samples to discard and verify the effectiveness of the selected parameters experimentally. The Fréchet inception distance (FID) of DFS-GAN is 14.57 ± 0.19 on the Canadian Institute for Advanced Research-10 (CIFAR-10) dataset, 20.87 ± 0.33 on CIFAR-100, and 92.42 ± 0.43 on ImageNet, lower than that of the current best method. Moreover, the SDM can be incorporated into many GANs to decrease the FID, provided their loss functions are compatible.
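The core SDM idea described above — dropping the fake samples the discriminator already classifies confidently before computing its loss — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the score convention (higher discriminator score means "more real"), and the fixed discard count `k` are all assumptions for the example.

```python
import numpy as np

def keep_hard_fakes(d_scores, k):
    """Return indices of fake samples kept for the discriminator loss.

    d_scores: discriminator outputs on a batch of fake samples
              (assumed convention: higher score = judged more "real").
    k:        number of easily discriminated fakes to discard.

    The k fakes with the lowest scores are the ones the discriminator
    already spots with confidence; per the DFS-GAN idea, they carry
    relatively uninformative gradients, so we drop them.
    """
    order = np.argsort(d_scores)   # ascending: easiest-to-spot fakes first
    kept = order[k:]               # discard the k lowest-scoring fakes
    return np.sort(kept)           # restore batch order for indexing

scores = np.array([0.05, 0.80, 0.40, 0.02, 0.60])
print(keep_hard_fakes(scores, 2))  # -> [1 2 4]
```

In a real training loop, the discriminator's fake-sample loss term would then be computed only over the returned indices; the generator's update is unaffected.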
