Abstract

Adversarial learning stability is one of the main difficulties of generative adversarial networks (GANs); it is closely related to network convergence and the quality of generated images. To improve stability, the multi-penalty-function GANs (MPF-GANs) are proposed. In this novel GAN, the penalty function method is used to transform the unconstrained GAN model into a constrained model, improving adversarial learning stability and the quality of generated images. In the divergence optimization task, two penalty divergences (the Wasserstein distance and the Jensen-Shannon divergence) are added alongside the main optimization divergence (the reverse Kullback-Leibler divergence). In the network structure, the generator and discriminator are multi-task networks in order to realize the multi-divergence optimization: every generator subtask corresponds to a discriminator subtask that optimizes the corresponding divergence. Experimental results on the CELEBA and CIFAR-10 data sets show that, although the number of parameters is increased, adversarial learning stability and the quality of generated images are significantly improved. The performance of the novel GAN is better than that of most GAN models and close to the state-of-the-art models SAGAN and SNGAN.

Keywords: Deep learning; Generative adversarial networks (GANs); Adversarial learning stability; Multi-penalty functions; Multi-task learning
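The multi-penalty idea described above can be sketched as a generator loss that combines a main reverse-KL term with weighted Wasserstein and Jensen-Shannon penalty terms, each fed by the output of its own discriminator subtask. This is a minimal NumPy sketch, not the paper's implementation; the surrogate forms of each divergence and the penalty weights `lam_w` and `lam_js` are assumptions.

```python
import numpy as np

def combined_generator_loss(d_rkl, d_w, d_js, lam_w=0.1, lam_js=0.1):
    """Sketch of a multi-penalty generator loss (assumed form, not the paper's).

    d_rkl, d_js: sigmoid outputs in (0, 1) from the reverse-KL and JS
        discriminator subtasks, evaluated on generated samples.
    d_w: unbounded critic scores from the Wasserstein subtask.
    lam_w, lam_js: hypothetical penalty weights.
    """
    eps = 1e-8
    d_rkl = np.clip(d_rkl, eps, 1 - eps)
    d_js = np.clip(d_js, eps, 1 - eps)
    # Main term: a common reverse-KL surrogate, -log(D / (1 - D)).
    loss_rkl = -np.mean(np.log(d_rkl / (1 - d_rkl)))
    # Wasserstein penalty: generator maximizes the critic score.
    loss_w = -np.mean(d_w)
    # JS penalty: non-saturating GAN loss, -log D.
    loss_js = -np.mean(np.log(d_js))
    return loss_rkl + lam_w * loss_w + lam_js * loss_js
```

Each subtask's discriminator would be trained on its own objective; only the generator sums the three terms, so the penalty divergences act as soft constraints on the main reverse-KL optimization.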
