Abstract

Generative Adversarial Networks (GANs) have recently received considerable attention due to their promising performance in image generation, inpainting and style transfer. However, GANs and their variants still face several challenges, including vanishing gradients, mode collapse and unbalanced training between the generator and the discriminator, which limit further improvement and application of GANs. In this paper, we propose Max-Margin Generative Adversarial Networks (MMGANs) to address these challenges by substituting the sigmoid cross-entropy loss of GANs with a max-margin loss. We present theoretical guarantees regarding the merits of the max-margin loss in solving the above problems in GANs. Experiments on MNIST and CelebA show that MMGANs have three main advantages over regular GANs. Firstly, MMGANs are robust to vanishing gradients and mode collapse. Secondly, MMGANs exhibit good stability and strong balance between generator and discriminator during training. Thirdly, MMGANs can easily be extended to multi-class classification tasks.
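The abstract's core idea, replacing the sigmoid cross-entropy objective with a max-margin objective, can be illustrated with a common hinge-loss formulation. This is a sketch under the assumption that MMGANs use a margin penalty of the standard hinge form; the paper's exact loss and margin parameter may differ.

```python
import numpy as np

def discriminator_hinge_loss(d_real, d_fake):
    """Max-margin (hinge) discriminator loss: penalize real scores
    below the margin +1 and fake scores above the margin -1.
    Scores are raw (unbounded) discriminator outputs, not sigmoid
    probabilities, so gradients do not saturate. (Illustrative
    formulation, not necessarily the paper's exact loss.)"""
    real_term = np.mean(np.maximum(0.0, 1.0 - d_real))
    fake_term = np.mean(np.maximum(0.0, 1.0 + d_fake))
    return real_term + fake_term

def generator_hinge_loss(d_fake):
    """Generator loss: push the discriminator's scores on fake
    samples upward; gradient is constant rather than vanishing."""
    return -np.mean(d_fake)

# Once both margins are satisfied the discriminator loss is exactly
# zero, so it stops pushing and cannot overpower the generator.
d_real = np.array([1.5, 2.0])   # real samples scored above +1
d_fake = np.array([-1.2, -3.0]) # fake samples scored below -1
print(discriminator_hinge_loss(d_real, d_fake))  # 0.0
```

Compared with sigmoid cross-entropy, the hinge formulation gives the generator non-vanishing gradients even when the discriminator is confident, which is one intuition for the robustness and training balance claimed above.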
