Abstract

Generative Adversarial Networks (GANs) have received immense attention in recent years due to their ability to capture complex, high-dimensional data distributions without the need for extensive labeling. Since their introduction in 2014, a wide array of GAN variants has been proposed, featuring alternative architectures, optimizers, and loss functions aimed at improving performance and training stability. This manuscript focuses on quantifying the resilience of a GAN architecture to specific modes of image degradation. We conduct systematic experiments to empirically determine the effects of 10 fundamental image degradation modes, applied to the training image dataset, on the Fréchet inception distance (FID) of images generated by a conditional deep convolutional GAN (cDCGAN). We find that, at the α=0.05 level, brightening, darkening, and blurring are statistically significantly more detrimental to the resulting GAN image quality than removing the degraded data entirely, while other degradations are typically safe to keep in training datasets. Additionally, we find that in the case of randomized partial occlusion, the FID of the resulting GAN images approaches that of the degraded training set as the level of occlusion increases, with the surprising result that GAN FID performance equals that of the degraded training set at 75% degradation.
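For reference, the FID metric used throughout compares Inception-v3 activation statistics of real and generated images under a Gaussian assumption; this is the standard definition of the metric, not a formula specific to this manuscript:

```latex
\mathrm{FID}(x, g) \;=\; \left\lVert \mu_x - \mu_g \right\rVert_2^2
\;+\; \operatorname{Tr}\!\left( \Sigma_x + \Sigma_g - 2\left( \Sigma_x \Sigma_g \right)^{1/2} \right)
```

where \((\mu_x, \Sigma_x)\) and \((\mu_g, \Sigma_g)\) are the mean and covariance of Inception-v3 activations for the reference and generated image sets, respectively; lower FID indicates closer distributions.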
