Abstract

Owing to their great success in unsupervised tasks, generative adversarial networks (GANs) are widely adopted for supervised (conditional) image-generation tasks such as inpainting. Unfortunately, designing an objective function with GANs is not trivial. For supervised tasks, a generator trained only with the GAN loss does not yield the corresponding target for a given input, because a GAN is trained to match the data distribution, not to find the exact answer. Therefore, the generator's loss function is often formulated as a linear combination of a supervised loss and the GAN loss, similar to multi-task learning, in the expectation that each loss compensates for the other's weakness. Contrary to this expectation, the two losses conflict in practice because each has a different optimum, yielding low objective and subjective image quality. To address this problem, we empirically investigated the conflict caused by combining a conventional GAN with pixel-wise losses; we then propose a novel (relativistic) accuracy-aware discriminator. Based on the proposed discriminator, we developed an accuracy-aware GAN (AAGAN) and proved its optimality under an ideal assumption. We then propose a relativistic accuracy-aware GAN (RAAGAN) under more practical assumptions. Experimental results on supervised tasks demonstrate that the proposed schemes alleviate the competition between losses and outperform conventional GANs in terms of both objective and subjective quality.
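For context, the conventional combined objective that the abstract critiques can be sketched as follows. This is a generic illustration, not the paper's AAGAN/RAAGAN formulation: it assumes an L1 pixel-wise loss, a non-saturating GAN loss, and a weighting factor `lam`, all of which are illustrative choices.

```python
import numpy as np

def pixel_loss(fake, target):
    # Supervised (pixel-wise) term: L1 distance to the ground-truth target
    return np.mean(np.abs(fake - target))

def gan_loss(d_fake):
    # Non-saturating generator GAN term: -log D(G(z)), with D outputs in (0, 1)
    return -np.mean(np.log(d_fake))

def combined_generator_loss(fake, target, d_fake, lam=0.01):
    # Linear combination of supervised and adversarial losses, as in
    # multi-task-style training; lam is an assumed weighting hyperparameter.
    return pixel_loss(fake, target) + lam * gan_loss(d_fake)

# Toy tensors standing in for a batch of generated images, targets,
# and discriminator outputs (purely illustrative).
rng = np.random.default_rng(0)
fake = rng.random((4, 8, 8))
target = rng.random((4, 8, 8))
d_fake = rng.uniform(0.1, 0.9, 4)

loss = combined_generator_loss(fake, target, d_fake)
print(loss > 0)
```

Note that the two terms pull the generator toward different optima: the pixel loss is minimized by the exact target, whereas the GAN loss is minimized by any sample the discriminator accepts as realistic, which is the conflict the paper's accuracy-aware discriminator is designed to alleviate.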
