Abstract

Generative adversarial networks (GANs) rely heavily on large datasets and carefully chosen hyperparameters to avoid overfitting or mode collapse. To address this issue, we propose cutout with patch-loss augmentation, a data augmentation method designed for GANs that applies cutout to the inputs of both the discriminator and the generator, together with a patch-loss structure and a new loss function. The method improves GAN performance on full datasets and promotes convergence and stability on limited datasets. We additionally propose a tensor value clamp, which accelerates training without compromising quality. Experiments show that the proposed method can be applied to a variety of GAN architectures: on CIFAR-10, it matches the performance of GANs trained on the full dataset while using only 20% of the training data. Finally, combining our approach with StyleGAN2-ADA further improves its Fréchet Inception Distance (FID) on CIFAR-10, LSUN-CAT, and FFHQ-256.
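
As context for the techniques named above, the sketch below illustrates standard cutout masking applied to both real and generated images before the discriminator, plus a simple tensor value clamp. It is a minimal sketch under stated assumptions, not the paper's implementation: the patch size, clamp range, clamp placement, and the non-saturating loss are illustrative stand-ins, and the abstract does not specify the paper's patch-loss structure or new loss function.

```python
import torch
import torch.nn.functional as F


def cutout(images: torch.Tensor, size: int = 16) -> torch.Tensor:
    """Zero out a random square patch in each image (standard cutout).

    Patch size and placement are illustrative assumptions; the paper's
    exact masking scheme is not given in the abstract.
    """
    n, _, h, w = images.shape
    out = images.clone()
    ys = torch.randint(0, h - size + 1, (n,))
    xs = torch.randint(0, w - size + 1, (n,))
    for i in range(n):
        y, x = int(ys[i]), int(xs[i])
        out[i, :, y:y + size, x:x + size] = 0.0
    return out


def d_loss(D, G, real: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Hypothetical discriminator loss with cutout on both real and
    generated images and a value clamp on the generated batch."""
    # Clamp generated pixels to the data range; one plausible reading of
    # the abstract's "tensor value clamp" (exact placement is an assumption).
    fake = torch.clamp(G(z).detach(), -1.0, 1.0)
    # Apply the same augmentation to both paths so the discriminator
    # cannot use the mask itself as a real/fake cue.
    real_logits = D(cutout(real))
    fake_logits = D(cutout(fake))
    # Non-saturating GAN loss as a stand-in; the paper's patch-loss
    # structure would instead score per-patch discriminator outputs.
    return F.softplus(-real_logits).mean() + F.softplus(fake_logits).mean()
```

Applying identical augmentation to real and fake batches follows the same reasoning as other discriminator-augmentation schemes (e.g., StyleGAN2-ADA): if only one path were masked, the discriminator could trivially separate the two distributions by detecting the mask.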
