Abstract

In this paper, we study a class of zero-sum two-player stochastic differential games in which the state dynamics are governed by controlled stochastic differential equations and the payoff/cost functionals are of recursive type. In contrast to the pioneering work by Fleming and Souganidis [Indiana Univ. Math. J. 38 (1989) 293–314] and the seminal work by Buckdahn and Li [SIAM J. Control Optim. 47 (2008) 444–475], the coefficients involved may be random, going beyond the Markovian framework and leading to random upper and lower value functions. We first prove the dynamic programming principle for the game, and then, under standard Lipschitz continuity assumptions on the coefficients, the upper and lower value functions are shown to be viscosity solutions of the associated upper and lower fully nonlinear stochastic Hamilton–Jacobi–Bellman–Isaacs (HJBI) equations, respectively. A stability property of viscosity solutions is also proved. Under certain additional regularity assumptions on the diffusion coefficient, the uniqueness of the viscosity solution is addressed as well.
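For orientation only, the following is a minimal sketch of the standard form such a recursive game takes, written with generic placeholder symbols (state coefficients b and sigma, generator f, terminal cost Phi, controls u and v, nonanticipative strategy beta); it is not the paper's exact formulation or notation, and in the non-Markovian setting studied here the coefficients and hence the value functions may additionally be random.

% Assumed generic formulation of a zero-sum recursive stochastic differential game;
% all symbols below are illustrative placeholders, not the paper's notation.
\begin{align*}
  % Controlled state dynamics driven by a Brownian motion W
  dX_s^{t,x;u,v} &= b\big(s, X_s^{t,x;u,v}, u_s, v_s\big)\,ds
                  + \sigma\big(s, X_s^{t,x;u,v}, u_s, v_s\big)\,dW_s,
  \qquad X_t^{t,x;u,v} = x, \\[4pt]
  % Recursive payoff defined through a backward SDE with generator f and terminal cost Phi
  -\,dY_s^{t,x;u,v} &= f\big(s, X_s^{t,x;u,v}, Y_s^{t,x;u,v}, Z_s^{t,x;u,v}, u_s, v_s\big)\,ds
                     - Z_s^{t,x;u,v}\,dW_s,
  \qquad Y_T^{t,x;u,v} = \Phi\big(X_T^{t,x;u,v}\big), \\[4pt]
  % Payoff functional and (lower) value, taken over nonanticipative strategies beta of player II
  J(t,x;u,v) &= Y_t^{t,x;u,v}, \qquad
  W(t,x) = \operatorname*{ess\,inf}_{\beta}\,\operatorname*{ess\,sup}_{u}\, J\big(t,x; u, \beta(u)\big).
\end{align*}

The upper value function is defined symmetrically, with the roles of the two players exchanged, and the dynamic programming principle and stochastic HJBI equations mentioned in the abstract refer to these (here random) value fields.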
