Abstract

The integral probability metric (IPM) equips generative adversarial nets (GANs) with the theoretical support needed to compare statistical moments in an embedded domain of the critic, while stabilising training and mitigating mode collapse. For enhanced intuition and physical insight, we introduce a generalisation of IPM-GANs that operates by directly comparing probability distributions rather than their moments. This is achieved through characteristic functions (CFs), a powerful tool that uniquely and completely characterises any general distribution. For rigour, we first prove theoretically that the CF loss can compare probability distributions, and then establish the physical meaning of the phase and amplitude of CFs. An optimal sampling strategy is then developed to calculate the CFs, and an equivalence between the embedded and data domains is proved under the reciprocal theory. This makes it possible to seamlessly combine the IPM-GAN with an auto-encoder structure via an advanced anchor architecture, which adversarially learns a semantic low-dimensional manifold for both generation and reconstruction. The resulting efficient reciprocal CF GAN (RCF-GAN) uses only two modules and a simple training strategy to achieve state-of-the-art bi-directional generation. Experiments demonstrate the superior performance of RCF-GAN on both regular (image) and irregular (graph) domains.
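To illustrate the core idea only (not the paper's exact formulation, which treats the phase and amplitude of the CFs separately and develops its own frequency-sampling strategy), a distribution discrepancy can be estimated by comparing empirical characteristic functions of two sample sets at randomly drawn frequencies. Below is a minimal NumPy sketch; the Gaussian frequency sampling, the `scale` parameter, and the squared-modulus discrepancy are all assumptions made for illustration.

```python
import numpy as np

def empirical_cf(samples, freqs):
    """Empirical characteristic function phi(t) = E[exp(i t.x)] over a batch.

    samples: (n, d) array of observations; freqs: (m, d) array of frequencies t.
    Returns an (m,) complex array of CF estimates.
    """
    proj = samples @ freqs.T            # (n, m) inner products t.x
    return np.exp(1j * proj).mean(axis=0)

def cf_discrepancy(x, y, n_freqs=64, scale=1.0, rng=None):
    """Monte-Carlo estimate of a CF-based distance between two sample sets.

    Frequencies are drawn from a zero-mean Gaussian (an illustrative choice,
    not the sampling strategy proposed in the paper).
    """
    rng = np.random.default_rng() if rng is None else rng
    t = rng.normal(scale=scale, size=(n_freqs, x.shape[1]))
    phi_x, phi_y = empirical_cf(x, t), empirical_cf(y, t)
    return np.mean(np.abs(phi_x - phi_y) ** 2)
```

Because the CF exists and is bounded for any distribution, such a loss remains well defined even when moments do not exist, which is one motivation for comparing distributions through CFs rather than moments.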
