Abstract

Image generation has long been a prominent research topic in machine learning. Its goal is to build models that learn specific semantic distributions from large-scale image data in order to generate realistic synthetic images. Driven by the rapid development of deep learning, generative models have advanced continuously and achieved great success in image generation tasks. According to the underlying generative model, existing deep learning based image generation methods can be broadly divided into three categories: methods based on the Variational Autoencoder (VAE), methods based on the Generative Adversarial Network (GAN), and methods that combine VAE and GAN. Focusing on these three frameworks, this paper describes the development and underlying principles of each type of generative model. The generation results of the different models on the same training set are then compared intuitively, their advantages and shortcomings are analyzed, and reasonable improvement measures are suggested for some of the problems. Finally, the development prospects of the various models are discussed.

