Abstract

For two-person dynamic zero-sum games (in both discrete and continuous settings), we investigate the limit of the value functions of finite-horizon games with long-run average cost as the time horizon tends to infinity, and the limit of the value functions of $\lambda$-discounted games as the discount rate $\lambda$ tends to zero. We prove that the Dynamic Programming Principle for value functions directly yields the Tauberian Theorem: the existence of a uniform limit of the value functions for one of the families implies that the other family also converges uniformly to the same limit. No assumptions on strategies are necessary. To this end, we consider a mapping that takes each payoff to the corresponding value function and preserves the sub- and superoptimality principles (the Dynamic Programming Principle). Using these principles, we obtain inequalities on the asymptotics of sub- and supersolutions, which lead to the Tauberian Theorem. In particular, we treat differential games without relying on the existence of a saddle point; a simple stochastic game model is also considered.
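For orientation, one standard normalization in the continuous-time setting (the notation $g$ for the running cost, $x_0$ for the initial state, and $\operatorname{val}$ for the game value is assumed here rather than taken from the paper) reads
$$
V_T(x_0)=\operatorname{val}\,\frac{1}{T}\int_0^T g\bigl(x(t)\bigr)\,dt,
\qquad
V_\lambda(x_0)=\operatorname{val}\,\lambda\int_0^\infty e^{-\lambda t}\,g\bigl(x(t)\bigr)\,dt,
$$
and the Tauberian Theorem then states that $V_T\to V$ uniformly as $T\to\infty$ if and only if $V_\lambda\to V$ uniformly as $\lambda\to 0^+$, with the same limit $V$.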
