Abstract

Global photographic aesthetic image generation aims to ensure that images generated by generative adversarial networks (GANs) both contain semantic information and convey a globally pleasing aesthetic. Existing aesthetic image generation algorithms are still at an exploratory stage, and images screened or generated by a computer have not yet reached a relatively ideal aesthetic quality. In this study, we use an existing generative model, StyleGAN, to construct high-quality image content and propose a new global aesthetic image generation algorithm based on disentangled GAN representations: by mining the latent aesthetic directions hidden in the GAN's latent space and applying aesthetic edits to the original image, we generate images of high quality in both global aesthetics and content. In contrast with traditional aesthetic image generation methods, our method does not require retraining the GAN. Using the existing StyleGAN generator, we learn a prediction model to score the generated images, use the scores as labels to learn a support vector machine (SVM) decision surface, and then use the learned decision surface to edit the original image and obtain an image with a global aesthetic quality. This method addresses the poor content construction and weak global aesthetics of images generated by existing methods. Experimental results show that the proposed method greatly increases the aesthetic score of the generated images and makes them better match human aesthetic preferences.
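The pipeline described above (score latent samples, fit an SVM decision surface, edit latent codes along its normal) can be illustrated with a minimal sketch. The helper names sample_latents, generate_images, and predict_aesthetic_score are hypothetical placeholders for a pretrained StyleGAN generator and an aesthetic score predictor, not the authors' released code; the quantile labelling and step size alpha are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical helpers: a pretrained StyleGAN generator and an aesthetic
# score predictor. These names are placeholders for illustration only.
from stylegan_wrapper import sample_latents, generate_images   # hypothetical
from aesthetic_model import predict_aesthetic_score            # hypothetical

# 1. Sample latent codes and score the corresponding generated images.
latents = sample_latents(n=10_000, dim=512)        # latent codes, shape (N, 512)
images = generate_images(latents)
scores = np.array([predict_aesthetic_score(img) for img in images])

# 2. Use the scores as labels: top/bottom quantiles form positive/negative classes.
hi, lo = np.quantile(scores, 0.8), np.quantile(scores, 0.2)
mask = (scores >= hi) | (scores <= lo)
X, y = latents[mask], (scores[mask] >= hi).astype(int)

# 3. Fit a linear SVM in latent space; its normal vector acts as an
#    "aesthetic direction" separating high- and low-scoring samples.
svm = LinearSVC(C=1.0, max_iter=10_000).fit(X, y)
direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])

# 4. Edit an original latent code along that direction to raise its aesthetic
#    quality, keeping the step small so the image content is preserved.
def edit_latent(z, alpha=3.0):
    return z + alpha * direction

edited_image = generate_images(edit_latent(latents[0:1]))
```

Because the edit only moves latent codes along a learned direction, the existing StyleGAN weights stay frozen, which is consistent with the claim that no retraining of the GAN is needed.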
