Abstract

We devote this paper to a theoretical analysis of deep neural networks from a game-theoretical perspective. We consider a general deep neural network D with linear activation functions f(x) = x + b. We show that such a deep neural network can be transformed into a non-atomic congestion game, regardless of whether it is fully connected or locally connected. Moreover, we show that learning the weight and bias vectors of D for a training set H is equivalent to computing an optimal solution of the corresponding non-atomic congestion game. In particular, when D is a deep neural network for a classification task, learning is equivalent to computing a Wardrop equilibrium of the corresponding non-atomic congestion game.
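The equivalence hinges on the fact that a network whose activations have the form f(x) = x + b is an affine map of its input, since a composition of affine maps is again affine. The following minimal sketch (not taken from the paper; the layer shapes, weights, and biases are illustrative assumptions) checks this collapse numerically.

```python
# Minimal sketch (illustrative only): with activations f(z) = z + b_act,
# every layer is affine, so the whole network collapses to a single
# affine function of its input: network(x) = A @ x + c.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b_layer, b_act):
    """One layer: linear map, layer bias, then activation f(z) = z + b_act."""
    z = W @ x + b_layer
    return z + b_act

# Three layers with arbitrary (assumed) shapes: 3 -> 4 -> 5 -> 2.
shapes = [(4, 3), (5, 4), (2, 5)]
Ws  = [rng.normal(size=s) for s in shapes]
bls = [rng.normal(size=s[0]) for s in shapes]   # layer biases
bas = [rng.normal(size=s[0]) for s in shapes]   # activation biases b

def network(x):
    for W, bl, ba in zip(Ws, bls, bas):
        x = layer(x, W, bl, ba)
    return x

# Composition of affine maps is affine: A is the product of the weight
# matrices, and c is the image of the zero vector.
A = Ws[2] @ Ws[1] @ Ws[0]
c = network(np.zeros(3))

x = rng.normal(size=3)
assert np.allclose(network(x), A @ x + c)
```

This affine structure is what makes it plausible to re-express the forward computation as flows over a network, which is the setting of non-atomic congestion games studied in the paper.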
