Abstract
Generative Adversarial Networks (GANs) are deep-learning-based generative models. This paper presents three methods to infer the input to the generator of auxiliary classifier generative adversarial networks (ACGANs), a type of conditional GAN. The first two methods, named i-ACGAN-r and i-ACGAN-d, are "inverting" methods, which obtain an inverse mapping from an image to the class label and the latent sample. By contrast, the third method, referred to as i-ACGAN-e, directly infers both the class label and the latent sample by introducing an encoder into an ACGAN. The three methods were evaluated on two natural scene datasets using two performance measures: the class recovery accuracy and the image reconstruction error. Experimental results show that i-ACGAN-e outperforms the other two methods in terms of class recovery accuracy, whereas the images generated by the other two methods have smaller image reconstruction errors. The source code is publicly available at https://github.com/XMPeng/Infer-Input-ACGAN.
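To illustrate the general idea behind the "inverting" methods, the following is a minimal sketch of optimization-based inversion of a conditional generator: given a target image, a latent sample and a (soft) class label are recovered by gradient descent on the image reconstruction error. The generator architecture, loss, optimizer, and hyperparameters below are placeholder assumptions for illustration only, not the paper's actual i-ACGAN-r or i-ACGAN-d implementation.

```python
# Illustrative sketch: inverting a conditional generator G(z, c).
# All names and settings here are assumptions, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGenerator(nn.Module):
    """Stand-in conditional generator G(z, c) -> image (placeholder, not the paper's model)."""
    def __init__(self, latent_dim=100, num_classes=10, img_dim=3 * 32 * 32):
        super().__init__()
        self.embed = nn.Embedding(num_classes, latent_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z, class_probs):
        # Condition on a soft class embedding so the class variable stays differentiable.
        c = class_probs @ self.embed.weight
        return self.net(z * c)

def invert(generator, target_img, latent_dim=100, num_classes=10, steps=500, lr=0.05):
    """Recover (z, class label) for a target image by minimizing the reconstruction error."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    class_logits = torch.zeros(1, num_classes, requires_grad=True)
    opt = torch.optim.Adam([z, class_logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = generator(z, F.softmax(class_logits, dim=1))
        loss = F.mse_loss(recon, target_img)  # image reconstruction error
        loss.backward()
        opt.step()
    return z.detach(), class_logits.argmax(dim=1)

if __name__ == "__main__":
    G = ToyGenerator()
    x = torch.tanh(torch.randn(1, 3 * 32 * 32))  # dummy target image
    z_hat, label_hat = invert(G, x)
    print("recovered class label:", label_hat.item())
```

By contrast, an encoder-based approach such as i-ACGAN-e would replace the per-image optimization loop with a single forward pass through a trained encoder that outputs the latent sample and class label directly.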