Abstract

Despite large strides in image generation with generative adversarial networks (GANs), evaluating and comparing GANs remains an open question. Several measures have been introduced, but there is no consensus on which score is best. In this paper, we examine the widely used Inception Score (based on KL divergence) and show that it fails to detect intra-class mode collapse. Meanwhile, the Wasserstein distance has received much attention for comparing distributions in recent years, but it carries a heavy computational burden in high-dimensional spaces. Our idea is to find an embedding space in which Euclidean distance mimics the Wasserstein distance, avoiding this computational cost. Such a space can be learned with a Siamese network, which trains quickly thanks to its shared weights. We also apply several newly proposed techniques to obtain better image embeddings. To evaluate our proposed metric (the Siamese Score), we simulate mode collapse by applying K-means clustering to a real dataset. To further validate it, we conduct an empirical study on several GAN models and apply the metric to their generated images. Experiments show that the Siamese Score detects mode collapse and is time-efficient compared with the Inception Score, and we believe the two scores are complementary.
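As a rough illustration of the core idea (not the paper's exact formulation), the sketch below assumes a shared-weight Siamese encoder trained with a standard contrastive loss, and uses the Euclidean distance between mean real and generated embeddings as a stand-in score. The names `SiameseEncoder` and `siamese_score`, the network architecture, and the mean-embedding comparison are illustrative assumptions; the paper's actual embedding techniques and score definition may differ.

```python
# Hypothetical sketch of a Siamese-embedding-based GAN score.
# Illustrates replacing a costly Wasserstein distance in pixel space
# with a cheap Euclidean distance in a learned embedding space.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseEncoder(nn.Module):
    """Shared-weight encoder mapping images to low-dimensional embeddings."""

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def contrastive_loss(z1, z2, same_class, margin=1.0):
    """Contrastive loss: pull same-class pairs together, push
    different-class pairs at least `margin` apart in embedding space."""
    dist = F.pairwise_distance(z1, z2)
    pos = same_class.float() * dist.pow(2)
    neg = (1 - same_class.float()) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()


def siamese_score(encoder, real_images, fake_images):
    """Illustrative score: Euclidean distance between mean embeddings of
    real and generated samples (smaller suggests better coverage)."""
    with torch.no_grad():
        mu_real = encoder(real_images).mean(dim=0)
        mu_fake = encoder(fake_images).mean(dim=0)
    return torch.norm(mu_real - mu_fake).item()
```

Because the encoder's weights are shared across both branches of each pair, training is comparatively fast, and the learned embedding, rather than raw pixels, carries the burden of distinguishing modes.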
