Abstract

Generative adversarial networks (GANs) are among the most popular generative frameworks and have achieved compelling performance. They follow an adversarial approach in which two deep models, a generator and a discriminator, compete with each other. In this paper, we propose a generative adversarial network with best hyper-parameter selection: the generator produces fake images of the digits 1–9, and the discriminator is trained to decide whether the generated images are fake or real. A genetic algorithm (GA) was used to adapt the GAN hyper-parameters; the resulting algorithm is named GANGA: generative adversarial network with genetic algorithm. The resulting algorithm achieved high performance; it was able to reach a loss value of zero for the generator and the discriminator separately. The Anaconda environment with the TensorFlow library was used, and Python was adopted as the programming language along with the needed libraries. The implementation was carried out on the MNIST dataset to validate the work. The proposed method lets the genetic algorithm choose the best hyper-parameter values by minimizing a cost function such as a loss function, or by maximizing an accuracy function, in order to find the best values of the learning rate, batch normalization, the number of neurons, and the dropout-layer parameter.
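As a rough illustration of the GANGA idea (not the paper's exact implementation), the sketch below runs a simple genetic algorithm over a small hyper-parameter search space covering the quantities named in the abstract. The search space values and the `evaluate` fitness function are hypothetical placeholders: in practice, `evaluate` would train the GAN briefly with the candidate hyper-parameters and return the loss to be minimized.

```python
# Minimal sketch of a genetic algorithm searching GAN hyper-parameters.
# The fitness function is a hypothetical stand-in: in the paper's setting it
# would train the GAN on MNIST and return the observed loss (lower is better).
import random

SEARCH_SPACE = {
    "learning_rate": [1e-4, 2e-4, 5e-4, 1e-3],
    "neurons":       [128, 256, 512, 1024],
    "dropout":       [0.2, 0.3, 0.4, 0.5],
    "batch_norm":    [True, False],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(ind):
    # Placeholder fitness: replace with a short GAN training run that
    # returns the generator/discriminator loss for these hyper-parameters.
    return (ind["learning_rate"] - 2e-4) ** 2 + 0.01 * ind["dropout"]

def crossover(a, b):
    # Uniform crossover: each gene is inherited from one of the two parents.
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    # With a small probability, resample a gene from the search space.
    for k in SEARCH_SPACE:
        if random.random() < rate:
            ind[k] = random.choice(SEARCH_SPACE[k])
    return ind

def genetic_search(pop_size=10, generations=5):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate)   # lower loss = fitter
        parents = scored[: pop_size // 2]           # truncation selection
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return min(population, key=evaluate)

if __name__ == "__main__":
    print("best hyper-parameters:", genetic_search())
```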

Highlights

  • Many machine learning systems look at some kind of complicated input and produce a simple output

  • The results suggest that Conditional Generative Adversarial Networks (cGANs) are a suitable alternative for strategy calibration and combination, providing outperformance when traditional techniques fail to generate any alpha

  • We studied the effects of hyper-parameters in Generative Adversarial Networks (GANs) and how they affect the generator and discriminator


Summary

Introduction

Many machine learning systems look at some kind of complicated input (say, an image) and produce a simple output (a categorical label such as "cat", or a numeric label such as 1, 2, or any other number that represents a class). The goal of a generative model is something like the opposite: take a small piece of input, perhaps a few random numbers or a noise vector, and produce a complex output, like an image of a realistic-looking face. This involves modeling a probability distribution on images, that is, a function that tells us which images are likely to be faces and which are not. This type of problem, modeling a function on a high-dimensional space, is exactly the sort of thing neural networks are made for. In [8], the authors propose the use of Conditional Generative Adversarial Networks (cGANs) for trading strategy calibration and aggregation. They provide a full methodology on: (i) the training and selection of a cGAN for time series data;
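To make the noise-to-image idea concrete, here is a minimal TensorFlow/Keras sketch (an assumed architecture, not necessarily the one used in the paper): a generator that maps a 100-dimensional noise vector to a 28x28 image, and a discriminator that scores an image as real or fake, as in a standard GAN on MNIST.

```python
# Minimal GAN sketch (assumed architecture): the generator turns a noise
# vector into a 28x28 image; the discriminator outputs a real/fake score.
import tensorflow as tf

def build_generator():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(256),
        tf.keras.layers.LeakyReLU(0.2),
        tf.keras.layers.Dense(28 * 28, activation="tanh"),  # pixel values in [-1, 1]
        tf.keras.layers.Reshape((28, 28, 1)),
    ])

def build_discriminator():
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256),
        tf.keras.layers.LeakyReLU(0.2),
        tf.keras.layers.Dropout(0.3),                   # dropout is one of the tuned hyper-parameters
        tf.keras.layers.Dense(1, activation="sigmoid"), # probability that the input is real
    ])

# Usage: sample noise and generate a batch of fake images, then score them.
noise = tf.random.normal((16, 100))
fake_images = build_generator()(noise)        # shape (16, 28, 28, 1)
scores = build_discriminator()(fake_images)   # shape (16, 1)
```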

