Summary
Fréchet inception distance (FID) has gained a strong reputation as an evaluation metric for generative adversarial networks (GANs). However, it is subject to fluctuation: the same GAN model, trained at different times, can yield different FID scores due to the randomness of the weight matrices in the networks, stochastic gradient descent, and the embedded distribution (the activation outputs at a hidden layer). The embedded distribution plays the key role in calculating FIDs, and where to obtain it is not a trivial question, since it also contributes to the fluctuation. In this article, I show that the embedded distribution can be obtained from three different subspaces of the weight matrix, namely the row space, the null space, and the column space, and I analyze the effect of each space on the Fréchet distance (FD). Since the different spaces exhibit different behaviors, choosing a subspace is not an insignificant decision. Instead of directly using the embedded distribution obtained from a hidden layer's activations to calculate the FD, I propose projecting the embedded distribution onto the null space of the weight matrix, among the three subspaces, to avoid the fluctuations. My simulation results on the MNIST, CIFAR10, and CelebA datasets show that projecting the embedded distributions onto the null spaces eliminates possible parasitic effects coming from the randomness and reduces the number of simulations needed on each of the MNIST, CIFAR10, and CelebA datasets.
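The two ingredients described above can be sketched in a few lines. The following is a minimal NumPy/SciPy illustration, not the article's actual implementation: it assumes a fully connected layer with weight matrix `W`, projects the hidden-layer activations onto the null space of `W` (the subspace of vectors `v` with `W @ v = 0`), and computes the Fréchet distance between Gaussians fitted to two feature sets. The function names are illustrative.

```python
import numpy as np
from scipy import linalg

def null_space_projection(activations, W):
    """Project activations onto the null space of weight matrix W.

    activations: (n_samples, d) hidden-layer outputs (the embedded distribution)
    W: (d_out, d) weight matrix; its null space is nontrivial when d_out < d
    """
    # Orthonormal basis N for the null space of W, found via SVD.
    N = linalg.null_space(W)            # shape (d, k)
    # With orthonormal columns, the projector onto span(N) is N @ N.T.
    return activations @ N @ N.T

def frechet_distance(x, y):
    """Fréchet distance between Gaussians fitted to two feature sets."""
    mu1, mu2 = x.mean(axis=0), y.mean(axis=0)
    s1 = np.cov(x, rowvar=False)
    s2 = np.cov(y, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):        # discard tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(s1 + s2 - 2.0 * covmean)
```

In this sketch, rows of the projected activations satisfy `W @ v = 0` by construction, so any component of the embedding that the next layer can still act on is removed before the FD is computed.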