In this paper, we consider a two-player zero-sum stochastic differential game with state and control path dependence. In our setup, the state process, which is controlled by both players, depends on the (current and past) paths of the state and the players' control processes; furthermore, the running cost in the objective functional depends on both the state and control paths of the players. We use the notion of non-anticipative strategies to define the lower and upper value functionals of the game, where, unlike in the existing literature, these value functionals depend on the initial state and control paths of the players. In the first main result of this paper, we prove that the lower and upper value functionals satisfy the dynamic programming principle (DPP), for which, unlike in the existing literature, the Skorohod metric is necessary to maintain the separability of the càdlàg (state and control) spaces. From the DPP, we introduce the lower and upper Hamilton–Jacobi–Isaacs (HJI) equations, which are state and control path-dependent nonlinear second-order partial differential equations. In the second main result, we show, by means of the functional Itô calculus, that the lower and upper value functionals are viscosity solutions of the lower and upper state and control path-dependent HJI equations, where the notion of viscosity solutions is defined on a compact κ-Hölder space in order to use several important estimates and to guarantee the existence of minimum and maximum points between the value functionals and the test functions. Based on these two main results, we also show that the Isaacs condition and the uniqueness of viscosity solutions imply the existence of the game value. Finally, we prove the uniqueness of classical solutions of the HJI equations in the state path-dependent case, whose proof requires establishing an equivalent classical solution structure as well as an appropriate contradiction argument.
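For orientation, the following is a schematic of the lower and upper HJI equations and the Isaacs condition in the classical Markovian special case without path dependence; the notation here is ours, not the paper's, and the paper's actual equations are path-dependent and formulated via functional derivatives:

```latex
% Lower/upper HJI equations (schematic, Markovian special case):
-\partial_t W^{\pm}(t,x) - H^{\pm}\!\big(t,x, D_x W^{\pm}(t,x), D_x^2 W^{\pm}(t,x)\big) = 0,
\qquad W^{\pm}(T,x) = g(x),
%
% with the lower Hamiltonian given by
H^{-}(t,x,p,A) = \sup_{v \in V}\, \inf_{u \in U}
\Big\{ \langle b(t,x,u,v), p \rangle
  + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma\sigma^{\top}(t,x,u,v)\,A\big)
  + \ell(t,x,u,v) \Big\},
%
% and H^{+} defined with the order of sup and inf interchanged.
% The Isaacs condition H^{-} \equiv H^{+}, combined with uniqueness of
% (viscosity) solutions, yields existence of the game value W^{-} = W^{+}.
```

Here $b$, $\sigma$, $\ell$, and $g$ denote generic drift, diffusion, running cost, and terminal cost; the assignment of sup and inf to the two players is a matter of convention.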