Abstract

As an important and challenging problem, image generation with limited data aims at generating realistic images by training a GAN model given only a few samples. A typical solution is to transfer a well-trained GAN model from a data-rich source domain to the data-deficient target domain. In this paper, we propose a novel self-supervised transfer scheme, termed D3T-GAN, which addresses cross-domain GAN transfer for image generation with limited data. Specifically, we design two individual strategies to transfer knowledge between generators and between discriminators, respectively. To transfer knowledge between generators, we conduct a data-dependent transformation that projects target samples into the latent space of the source generator and reconstructs them back. We then perform knowledge transfer from the transformed samples to the generated samples. To transfer knowledge between discriminators, we design a multi-level discriminant knowledge distillation from the source discriminator to the target discriminator on both real and fake samples. Extensive experiments show that our method improves the quality of generated images and achieves state-of-the-art FID scores on commonly used datasets.
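The abstract describes two transfer losses: a generator-side loss built on latent inversion and reconstruction, and a discriminator-side multi-level feature distillation. The following is a minimal PyTorch sketch of how such losses could look; the function names, the optimization-based inversion, the MSE objectives, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the two transfer losses from the abstract.
# G_s / D_s: frozen, well-trained source generator / discriminator.
# G_t / D_t: target generator / discriminator being trained.
import torch
import torch.nn.functional as F


def invert_sample(G_s, x, z_dim=512, steps=200, lr=0.01):
    """Data-dependent transformation (assumed form): project a target
    image x into the latent space of the frozen source generator G_s
    by optimizing a latent code z, then reconstruct it as G_s(z)."""
    z = torch.randn(x.size(0), z_dim, device=x.device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(G_s(z), x)  # pixel-level reconstruction error
        loss.backward()
        opt.step()
    with torch.no_grad():
        x_rec = G_s(z)
    return z.detach(), x_rec


def generator_transfer_loss(G_t, z, x_rec):
    """Transfer knowledge from the transformed (reconstructed) sample
    to the target generator's output at the same latent code."""
    return F.mse_loss(G_t(z), x_rec)


def discriminator_distill_loss(feats_t, feats_s):
    """Multi-level distillation: match the target discriminator's
    intermediate features to the frozen source discriminator's, where
    feats_t / feats_s are lists of per-layer features computed on the
    same batch (applied to both real and fake samples)."""
    return sum(F.mse_loss(ft, fs.detach())
               for ft, fs in zip(feats_t, feats_s))
```

In this reading, the inversion step ties the transfer to actual target data (hence "data-dependent"), while the feature-level distillation gives the target discriminator multi-scale supervision rather than matching only its final output.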
