Abstract

In this paper, we study a two-player zero-sum stochastic differential game with regime switching in the framework of forward-backward stochastic differential equations on a finite time horizon. By means of backward stochastic differential equation methods, in particular the notion of stochastic backward semigroups, we prove a dynamic programming principle for both the upper and the lower value functions of the game. Based on the dynamic programming principle, the upper and the lower value functions are shown to be the unique viscosity solutions of the associated upper and lower Hamilton–Jacobi–Bellman–Isaacs equations.

Highlights

  • We investigate a two-player zero-sum SDG with regime switching in the framework of BSDEs on a finite time horizon. The dynamics of the SDG are described by a functional stochastic differential equation (SDE) on the interval [0, T].

  • Precise definitions of the admissible strategies α and β are given.

  • In the case W = U we say that the game admits a value. The main objective of this paper is to show that W and U are, respectively, the unique viscosity solutions of the associated lower and upper HJBI equations, each a system of m coupled equations.
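For orientation, the lower HJBI system in such regime-switching games typically takes the following coupled form. This is a sketch under standard assumptions: the Hamiltonian H⁻, the coefficients b, σ, f, the terminal data Φ, and the chain generator (q_{ij}) are generic placeholders, not the paper's exact notation.

```latex
% Lower HJBI system: one equation per regime i \in M = \{1,\dots,m\},
% coupled through the generator (q_{ij}) of the switching Markov chain.
\begin{cases}
  \partial_t W(t,x,i)
    + H^-\!\big(t,x,i,DW(t,x,i),D^2W(t,x,i)\big)
    + \sum_{j \neq i} q_{ij}\,\big[W(t,x,j) - W(t,x,i)\big] = 0,
    & (t,x) \in [0,T) \times \mathbb{R}^n,\ i \in M, \\[4pt]
  W(T,x,i) = \Phi(x,i),
    & x \in \mathbb{R}^n,\ i \in M,
\end{cases}
```

with the lower ("sup–inf") Hamiltonian given, in this generic form, by

```latex
% The upper Hamiltonian H^+ swaps \sup and \inf; when H^- = H^+
% (an Isaacs-type condition) one typically obtains W = U.
H^-(t,x,i,p,A)
  = \sup_{u \in U}\,\inf_{v \in V}
    \Big\{ \tfrac{1}{2}\,\mathrm{tr}\big(\sigma\sigma^{\top}(t,x,i,u,v)\,A\big)
         + p \cdot b(t,x,i,u,v) + f(t,x,i,u,v) \Big\}.
```

The coupling term over j ≠ i is what makes the lower and upper equations systems of m coupled PDEs rather than scalar equations.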


Summary

Introduction

We investigate a two-player zero-sum SDG with regime switching in the framework of BSDEs on a finite time horizon. The dynamics of the SDG are described by a functional stochastic differential equation (SDE) on the interval [0, T]. We define the lower and the upper value functions W and U, respectively, by

W(t, x, i) ≔ essinf_{β∈B_{t,T}} esssup_{u∈U_{t,T}} J(t, x, i; u, β(u)),

with U defined symmetrically. Under the assumptions (A3) and (A4), the following dynamic programming principle holds: for all 0 < δ ≤ T − t, x ∈ Rⁿ, i ∈ M,

U(t, x, i) = esssup_{α∈A_{t,t+δ}} essinf_{v∈V_{t,t+δ}} …
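Concretely, the forward-backward system behind the cost functional J can be sketched as follows. The notation is generic: b, σ, f, Φ and the regime chain Λ are placeholders for the paper's coefficients, and the martingale term associated with the chain is omitted from the BSDE for brevity.

```latex
% Controlled forward SDE with regime switching, started at (t, x, i);
% \Lambda is a finite-state Markov chain on M = \{1,\dots,m\}
% and B is a Brownian motion:
dX_s = b(s, X_s, \Lambda_s, u_s, v_s)\,ds
     + \sigma(s, X_s, \Lambda_s, u_s, v_s)\,dB_s,
\qquad X_t = x,\ \Lambda_t = i,\ s \in [t, T].

% Associated BSDE defining the payoff:
-\,dY_s = f(s, X_s, \Lambda_s, Y_s, Z_s, u_s, v_s)\,ds - Z_s\,dB_s,
\qquad Y_T = \Phi(X_T, \Lambda_T),

% and the cost functional is the initial value of the backward component:
J(t, x, i; u, v) \coloneqq Y_t.
```

The stochastic backward semigroup used in the dynamic programming principle is built from this BSDE solved over the short interval [t, t + δ], with the value function at time t + δ in place of the terminal data.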

