Abstract

Generative Adversarial Networks (GANs), which use deep learning to model the content of a dataset and are particularly effective at data generation, have attracted considerable research interest. Despite their impressive performance, it remains unclear exactly how GANs map latent space vectors to realistic images and how the chosen dimensionality of the latent space affects the quality of the generated images. In this paper, we explored the potential of generative models for generating animal face images, using the Deep Convolutional Generative Adversarial Network (DCGAN) as a reference model. To analyze the impact of the latent space dimensionality, we synthesized animal face images by training the DCGAN model on the widely used AFHQ dataset. We quantitatively evaluated the generated images using the Fréchet Inception Distance (FID) and the Inception Score (IS). As a result, we demonstrated that generative models can produce images with latent sizes both significantly smaller and significantly larger than the standard size of 100.
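As a rough illustration of the kind of setup the abstract describes, the sketch below shows a PyTorch DCGAN-style generator whose latent dimensionality is a constructor argument, so images can be sampled for latent sizes other than the default of 100. This is a minimal sketch under assumptions not stated in the abstract: the 64x64 output resolution, the layer widths (feature_maps), and the example latent_dim of 64 are illustrative choices, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """DCGAN-style generator mapping a latent vector z to a 64x64 RGB image."""
    def __init__(self, latent_dim: int = 100, feature_maps: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 -> (feature_maps*8) x 4 x 4
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.ReLU(inplace=True),
            # -> (feature_maps*4) x 8 x 8
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.ReLU(inplace=True),
            # -> (feature_maps*2) x 16 x 16
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(inplace=True),
            # -> feature_maps x 32 x 32
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(inplace=True),
            # -> 3 x 64 x 64, tanh to match images normalized to [-1, 1]
            nn.ConvTranspose2d(feature_maps, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Sample a batch of latent vectors for a chosen latent size and generate images.
latent_dim = 64  # hypothetical value; the study varies this above and below 100
generator = DCGANGenerator(latent_dim=latent_dim)
z = torch.randn(16, latent_dim, 1, 1)
fake_images = generator(z)  # shape: (16, 3, 64, 64)
```

Note that varying latent_dim only changes the input channels of the first transposed convolution, which makes it straightforward to retrain the same architecture for different latent sizes and compare the resulting FID and IS scores.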
