Abstract

Animation culture in the 20th century has long since moved beyond paper-based cartoons. Where artists once drew comic characters by hand with a pen, people can now use computers to generate anime characters. As with most forms of art, anime characters are typically designed and created by an artist, but if the production of anime characters is automated with a Generative Adversarial Network (GAN), will new forms of anime characters emerge? The Generative Adversarial Network (GAN) is a deep learning model and one of the most promising approaches of recent years to unsupervised learning on complex distributions. The model produces quite good output through the mutual, game-like training of (at least) two modules in the framework: a generative model (G) and a discriminative model (D). The generator is a network that generates images, while the discriminator is a network that evaluates whether an image is “real” or generated. For instance, given a set of pictures of dogs, the generative model produces a new picture of a dog that is not in the data set, while the discriminative model takes a picture and judges whether the animal in it is a cat or a dog. This paper also uses DCGAN, which builds on both GAN and the Convolutional Neural Network (CNN). In the Deep Convolutional Generative Adversarial Network (DCGAN), both the discriminator and the generator use CNNs in place of the multilayer perceptrons of the original GAN. DCGAN, as the name implies, adds deep convolution to a GAN: it consists mainly of convolutional layers, with no max-pooling or fully connected layers, and uses strided convolutions and transposed convolutions for downsampling and upsampling, respectively.
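The downsampling/upsampling behavior described above can be sketched with the standard convolution output-size formulas. This is a minimal illustration, assuming the kernel size 4, stride 2, padding 1 configuration commonly used in DCGAN implementations (the abstract does not specify exact hyperparameters):

```python
def conv_out(size, kernel=4, stride=2, padding=1):
    # Standard (strided) convolution output size: floor((n + 2p - k) / s) + 1
    return (size + 2 * padding - kernel) // stride + 1

def conv_transpose_out(size, kernel=4, stride=2, padding=1):
    # Transposed convolution output size: (n - 1) * s - 2p + k
    return (size - 1) * stride - 2 * padding + kernel

# Discriminator path: each strided convolution halves the spatial size,
# replacing max pooling for downsampling.
disc_sizes = [64]
for _ in range(4):
    disc_sizes.append(conv_out(disc_sizes[-1]))
print(disc_sizes)  # [64, 32, 16, 8, 4]

# Generator path: each transposed convolution doubles the spatial size,
# upsampling a small feature map toward a full image.
gen_sizes = [4]
for _ in range(4):
    gen_sizes.append(conv_transpose_out(gen_sizes[-1]))
print(gen_sizes)  # [4, 8, 16, 32, 64]
```

With these settings the two paths are mirror images of each other: the discriminator contracts a 64×64 image to a 4×4 feature map, and the generator expands a 4×4 map back to 64×64, which is why DCGAN needs neither pooling nor fully connected layers for resizing.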
