Abstract

Training generative adversarial networks (GANs) is known to be difficult, especially for financial time series. This paper first analyzes the well‐posedness problem in the minimax games of GANs and the widely recognized convexity issue in GAN objective functions. It then proposes a stochastic control framework for hyper‐parameter tuning in GAN training. A weak form of the dynamic programming principle, along with the existence and uniqueness of the value function in the viscosity sense, is established for the corresponding minimax game. In particular, explicit forms for the optimal adaptive learning rate and batch size are derived and are shown to depend on the convexity of the objective function, revealing a relation between improper choices of learning rate and explosion in GAN training. Finally, empirical studies demonstrate that training algorithms incorporating this adaptive control approach outperform the standard ADAM method in terms of convergence and robustness. From the perspective of GAN training, the analysis in this paper provides analytical support for the popular practice of “clipping,” and suggests that the convexity and well‐posedness issues in GANs may be tackled through appropriate choices of hyper‐parameters.
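The abstract's central claims can be illustrated informally: an optimal learning rate that scales with the local convexity of the objective, combined with clipping to prevent explosion. The toy sketch below is not the paper's derivation; it is a hypothetical one-dimensional example in which the step size is scaled by an inverse curvature estimate and then clipped, contrasting with a fixed-step scheme.

```python
# Hypothetical illustration only (not the paper's algorithm): gradient descent
# whose learning rate adapts to a local convexity (second-derivative) estimate
# and is clipped, echoing the abstract's link between learning rate, convexity,
# and "clipping" as a guard against explosion.

def curvature_adaptive_descent(grad, hess, x0=3.0, base_lr=0.5, clip=1.0, steps=50):
    """Run clipped, curvature-adaptive gradient descent on a 1-D objective."""
    x = x0
    for _ in range(steps):
        h = max(hess(x), 1e-8)       # local convexity proxy (floored for safety)
        lr = min(base_lr / h, clip)  # adapt to curvature, then clip the step size
        x -= lr * grad(x)
    return x

# Example objective f(x) = x^4: curvature vanishes near 0 and grows steeply
# away from it, so a fixed large step can overshoot while the adaptive,
# clipped step contracts toward the minimizer at 0.
x_star = curvature_adaptive_descent(grad=lambda x: 4 * x**3,
                                    hess=lambda x: 12 * x**2)
```

Here `curvature_adaptive_descent` and the quartic test objective are assumptions chosen for illustration; the paper itself derives the optimal learning rate and batch size within a stochastic control framework for the GAN minimax game.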
