Synthesizing high-quality and diverse samples is the main goal of generative models. Despite great recent progress in generative adversarial networks (GANs), mode collapse remains an open problem, and mitigating it helps the generator better capture the target data distribution. This article rethinks alternating optimization, the standard way GANs are trained in practice. We find that the theory presented in the original GAN formulation does not account for this practical training procedure. Under alternating optimization, the vanilla loss function provides an inappropriate objective for the generator: it pushes the generator toward outputs to which the discriminator assigns the highest probability of being real, which leads to mode collapse. To address this problem, we introduce a novel loss function for the generator that is adapted to the alternating optimization setting. Updating the generator with the proposed loss theoretically minimizes the reverse Kullback-Leibler divergence between the model distribution and the target distribution, which encourages the model to learn the target distribution. Extensive experiments demonstrate that our approach consistently boosts model performance across various datasets and network architectures.
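The abstract does not give the exact form of the proposed loss, so the following is only a minimal PyTorch sketch of the general idea it describes: replacing the vanilla generator objective with one that, under an optimal discriminator, corresponds to minimizing a reverse KL divergence between the model and target distributions. The functions `generator`, `discriminator`, and the hyperparameter `latent_dim` are assumed placeholders, and the reverse-KL surrogate shown (the log(1-D) - log D form known from the GAN literature) may differ from the paper's actual loss.

```python
# Illustrative sketch only; not the paper's exact loss.
import torch
import torch.nn.functional as F

def generator_loss_vanilla(d_fake_logits):
    # Vanilla non-saturating objective: maximize log D(G(z)),
    # i.e. minimize softplus(-logit). This drives samples toward the
    # discriminator's highest-confidence region, which can collapse modes.
    return F.softplus(-d_fake_logits).mean()

def generator_loss_reverse_kl(d_fake_logits):
    # Reverse-KL-style objective: minimize E_z[log(1 - D(G(z))) - log D(G(z))].
    # With D = sigmoid(logit), this simplifies exactly to E_z[-logit].
    return (-d_fake_logits).mean()

def update_generator(generator, discriminator, g_optimizer,
                     batch_size, latent_dim, device):
    # One generator step of the alternating optimization loop
    # (the discriminator is held fixed during this update).
    z = torch.randn(batch_size, latent_dim, device=device)
    d_fake_logits = discriminator(generator(z))
    loss = generator_loss_reverse_kl(d_fake_logits)
    g_optimizer.zero_grad()
    loss.backward()
    g_optimizer.step()
    return loss.item()
```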