Abstract

This paper studies the interplay between deep learning and game theory. It models basic deep learning tasks as strategic games, and then presents distributionally robust games and their relationship with deep generative adversarial networks (GANs). To achieve a higher-order convergence rate without using the second derivative of the objective function, a Bregman discrepancy is used to construct an accelerated deep learning algorithm. Each player has a continuous action space, corresponding to the weight space, and aims to learn an optimal strategy. The convergence rate of the proposed deep learning algorithm is derived using a mean estimate. Experiments are carried out on a real dataset with both shallow and deep GANs. Both qualitative and quantitative evaluation results show that the generative model trained with the Bregman deep learning algorithm achieves a speed-up over state-of-the-art training.
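To make the Bregman-based update concrete, the sketch below shows a generic mirror-descent step, the standard optimization scheme built on a Bregman divergence. It assumes the negative-entropy potential (whose Bregman divergence is the KL divergence) and a hypothetical quadratic loss on the probability simplex; it illustrates the general technique the abstract refers to, not the authors' specific algorithm.

```python
import numpy as np

TARGET = np.array([0.2, 0.3, 0.5])  # hypothetical optimum on the simplex

def loss_grad(w):
    """Gradient of the illustrative quadratic loss 0.5 * ||w - TARGET||^2."""
    return w - TARGET

def mirror_descent_step(w, lr):
    """One Bregman (mirror-descent) step with the negative-entropy potential:
    argmin_v <loss_grad(w), v> + (1/lr) * KL(v || w),
    which yields the exponentiated-gradient update. Only first-order
    information is used; no second derivative is required."""
    v = w * np.exp(-lr * loss_grad(w))
    return v / v.sum()  # project back onto the probability simplex

w = np.ones(3) / 3  # uniform initialization
for _ in range(100):
    w = mirror_descent_step(w, lr=0.5)
print(w)  # approaches TARGET
```

Choosing a different potential changes the geometry of the update (the squared Euclidean norm recovers plain gradient descent), which is how a Bregman discrepancy can accelerate first-order learning without second-derivative information.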
