Abstract

We consider a stochastic differential game in the context of forward-backward stochastic differential equations, where one player implements an impulse control while the opponent controls the system continuously. Utilizing the notion of "backward semigroups," we first prove the dynamic programming principle (DPP) for a truncated version of the problem in a straightforward manner. A uniform convergence argument then enables us to establish the DPP for the general setting. Our approach avoids technical constraints imposed in previous works on the same problem and, more importantly, allows us to consider impulse costs that depend on the present value of the state process, as well as unbounded coefficients. Using the dynamic programming principle, we deduce that the upper and lower value functions are both solutions (in the viscosity sense) to the same Hamilton-Jacobi-Bellman-Isaacs obstacle problem. By showing uniqueness of solutions to this partial differential inequality, we conclude that the game has a value.
