Abstract

A conditional generative adversarial network (cGAN) is a generative adversarial network (GAN) that generates data with a desired condition from a latent vector. Among the different types of cGAN, the auxiliary classifier GAN (ACGAN) is the most frequently used. In this study, we describe the problems of an ACGAN and propose replacing it with a conditional activation GAN (CAGAN) to reduce the number of hyperparameters and improve the training speed. The loss function of a CAGAN is defined as the sum of the losses of the individual GANs created for each condition. The proposed CAGAN is an integration of multiple GANs in which each GAN shares all hidden layers, so the integrated model can be considered a single GAN. Consequently, integrating the GANs does not significantly increase the computational cost. Additionally, to prevent the conditions given to the discriminator of a cGAN from being ignored due to batch normalization, we propose mixed batch training, in which every batch for the discriminator keeps the ratio of real to generated data consistent.
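
As an illustration of this per-condition summation, consider the following minimal PyTorch sketch (the paper does not prescribe a framework, and the function name cagan_d_loss and the per-condition gather below are our assumptions). The discriminator is assumed to emit one scalar output per condition, so each sample contributes only through the output of the GAN assigned to its condition:

    # Hypothetical sketch of the CAGAN discriminator loss: the sum over
    # conditions of an ordinary GAN loss, where each sample contributes
    # only through the output assigned to its condition.
    import torch
    import torch.nn.functional as F

    def cagan_d_loss(d_real, d_fake, cond_real, cond_fake):
        # d_real, d_fake: (batch, num_conditions) discriminator logits
        # cond_real, cond_fake: (batch,) integer condition labels
        real_logit = d_real.gather(1, cond_real.unsqueeze(1)).squeeze(1)
        fake_logit = d_fake.gather(1, cond_fake.unsqueeze(1)).squeeze(1)
        # Standard GAN loss per selected output; averaging over the batch
        # combines the losses of the per-condition GANs, since each sample
        # activates exactly one of them.
        loss_real = F.binary_cross_entropy_with_logits(
            real_logit, torch.ones_like(real_logit))
        loss_fake = F.binary_cross_entropy_with_logits(
            fake_logit, torch.zeros_like(fake_logit))
        return loss_real + loss_fake

Because all hidden layers are shared and only the final per-condition outputs differ, this summation adds almost no computation over a single unconditional GAN.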

Highlights

  • A conditional generative adversarial network [1] is a generative adversarial network (GAN) [2] that can generate data with a desired condition from a latent vector

  • We describe the disadvantages of an auxiliary classifier GAN (ACGAN) and the reasons for modifying it

  • We propose a conditional activation GAN (CAGAN) that can replace an ACGAN, reducing the number of hyperparameters and improving the training speed, thereby overcoming the ACGAN problems mentioned above

Summary

INTRODUCTION

A conditional generative adversarial network (cGAN) [1] is a generative adversarial network (GAN) [2] that can generate data with a desired condition from a latent vector. In an ACGAN, when the real and generated data distributions are the same, the auxiliary classifier of the discriminator, together with the generator, can be considered a group of GANs in which each GAN is trained on its condition with a cross-entropy adversarial loss and shares all hidden layers. In a CAGAN, by contrast, each GAN can be trained with an advanced adversarial loss that produces meaningful gradients even when the real and generated data distributions differ, so meaningful gradients are available even in the early stages of training. For this reason, the performance of a CAGAN is better than that of an ACGAN. When the training of the generator and discriminator is unbalanced, the target real-to-generated ratio may deviate from 50:50, but in general, 50:50 is used.
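
As a concrete illustration of mixed batch training, the following hypothetical PyTorch helper (our own sketch, not code from the paper) assembles each discriminator batch with a fixed real-to-generated ratio, 50:50 by default, so that batch normalization statistics do not differ between all-real and all-generated batches:

    # Hypothetical helper for mixed batch training: every discriminator
    # batch keeps the same real:generated ratio (50:50 unless the
    # generator/discriminator balance calls for another target).
    import torch

    def make_mixed_batch(real_x, fake_x, real_ratio=0.5):
        n = min(real_x.size(0), fake_x.size(0))
        n_real = int(n * real_ratio)   # e.g. 32 real of 64 when ratio is 0.5
        n_fake = n - n_real
        x = torch.cat([real_x[:n_real], fake_x[:n_fake]], dim=0)
        y = torch.cat([torch.ones(n_real), torch.zeros(n_fake)], dim=0)
        perm = torch.randperm(n)       # shuffle so BN sees a mixed batch
        return x[perm], y[perm]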

MATERIAL AND METHODS
MNIST EXPERIMENT
EXPERIMENTAL RESULTS AND DISCUSSION
CONCLUSIONS