Abstract
Generative adversarial networks (GANs) are generative models based on game theory. Because the balance between the generator and the discriminator must be carefully maintained during training, stable training is difficult to achieve. Although several solutions have been proposed to alleviate this issue, improving the stability of GANs remains an open problem. We propose a GAN we call the discarding fake samples GAN (DFS-GAN). During training, some generated samples fail to fool the discriminator and provide relatively uninformative gradients for it. Therefore, in the stabilized discriminator module (SDM), we discard these fake but easily discriminated samples. We also propose a new loss function, SGAN-gradient penalty 1. We explain the rationale for the SDM and our loss function from a Bayesian decision perspective, derive the optimal number of fake samples to discard, and verify the effectiveness of the selected parameters experimentally. The Fréchet inception distance (FID) of DFS-GAN is 14.57 ± 0.19 on Canadian Institute for Advanced Research-10 (CIFAR-10), 20.87 ± 0.33 on CIFAR-100, and 92.42 ± 0.43 on ImageNet, all lower than those of the current best method. Moreover, the SDM can be used in many GANs to decrease the FID, provided their loss functions are compatible.
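To make the discarding idea concrete, the following is a minimal PyTorch sketch of how a discriminator update might drop the most easily discriminated fake samples before computing its loss. It is an illustration under assumptions, not the authors' implementation: the names `discriminator_fake_loss`, `drop_k`, `D`, and `G` are hypothetical, and the loss shown is a standard binary cross-entropy (SGAN-style) term without the paper's gradient penalty.

```python
# Hypothetical sketch of the SDM discarding step (not the authors' code).
import torch
import torch.nn.functional as F

def discriminator_fake_loss(D, G, z, drop_k):
    """Score a batch of fake samples, discard the drop_k samples that are
    most easily identified as fake, and compute the loss on the rest."""
    fake = G(z).detach()             # no gradient into the generator here
    logits = D(fake).squeeze(1)      # higher logit = "looks real" to D
    # Keep the fakes that best fool D; the lowest-scoring (easily
    # discriminated) ones are discarded and contribute no gradient.
    keep = logits.topk(logits.numel() - drop_k).indices
    kept_logits = logits[keep]
    # Kept fakes should still be classified as fake (label 0).
    return F.binary_cross_entropy_with_logits(
        kept_logits, torch.zeros_like(kept_logits))
```

In this sketch, `drop_k` plays the role of the number of discarded fake samples whose optimal value the paper derives and validates experimentally.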