Abstract

The application of Generative Adversarial Networks (GANs) has grown rapidly in recent years. This paper presents the training results of different GAN models at different numbers of epochs. Because a GAN is an architectural framework rather than a fixed network, it can be built on different types of neural network layers, such as those of a Convolutional Neural Network (CNN) or a multilayer perceptron (MLP). In this experiment, models with different layer types are constructed and trained separately to compare their performance and efficiency. To that end, all models perform the same task: generating handwritten digits. Three GAN models are tested. Two are based on an MLP; they share the same structure and the same number of units and differ only in their activation function, one using the sigmoid function and the other using LeakyReLU. The third model is based on a CNN; a GAN built from convolutional layers is known as a DCGAN. All models are implemented in TensorFlow, and each is trained for 10, 20, 40, 80, 120, and 160 epochs. The generated images and loss values are recorded at each of these epoch counts. The experimental results show that, compared with the MLP-based GAN models, the DCGAN achieves higher training efficiency and better image quality, and that the choice of activation function also affects the efficiency of a GAN model.
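
As a concrete illustration of the generator variants the abstract describes, the sketch below shows how the two MLP generators (identical except for the hidden activation) and a DCGAN-style generator might be defined in TensorFlow/Keras. The latent dimension, unit counts, and kernel sizes are assumptions chosen for illustration; the paper does not specify them here.

import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100  # assumed size of the input noise vector

def mlp_generator(use_leaky_relu=False):
    # One of the two MLP generator variants: same structure and units,
    # differing only in the hidden activation (sigmoid vs. LeakyReLU).
    # The unit counts (256, 512) are illustrative placeholders.
    def act():
        return layers.LeakyReLU(0.2) if use_leaky_relu else layers.Activation("sigmoid")
    return tf.keras.Sequential([
        layers.Input(shape=(LATENT_DIM,)),
        layers.Dense(256), act(),
        layers.Dense(512), act(),
        layers.Dense(28 * 28, activation="tanh"),  # 28x28 handwritten digit
        layers.Reshape((28, 28, 1)),
    ])

def dcgan_generator():
    # DCGAN generator: transposed convolutions upsample the noise
    # vector into a 28x28 single-channel image.
    return tf.keras.Sequential([
        layers.Input(shape=(LATENT_DIM,)),
        layers.Dense(7 * 7 * 128),
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Reshape((7, 7, 128)),
        layers.Conv2DTranspose(64, kernel_size=5, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(1, kernel_size=5, strides=2,
                               padding="same", activation="tanh"),
    ])

Each generator would be paired with a matching discriminator and trained adversarially; recording the generated images and loss values at the stated epoch counts then allows the comparison the abstract reports.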
