Abstract

We consider an optimal control problem for piecewise deterministic Markov processes (PDMPs) on a bounded state space. A pair of controls acts continuously on the deterministic flow and on the two transition measures (in the interior and from the boundary of the domain) describing the jump dynamics of the process. For this class of control problems, the value function can be characterized as the unique viscosity solution to the corresponding fully nonlinear Hamilton--Jacobi--Bellman equation with a nonlocal type boundary condition. By means of the recent control randomization method, we are able to provide a probabilistic representation for the value function in terms of a constrained backward stochastic differential equation (BSDE), known as the nonlinear Feynman--Kac formula. This result considerably extends the existing literature, where only the case with no jumps from the boundary is considered. The additional boundary jump mechanism is described in terms of a non-quasi-left-continuous random measure and induces predictable jumps in the PDMP's dynamics. The existence and uniqueness results for BSDEs driven by such a random measure are nontrivial, even in the unconstrained case, as emphasized in the recent work [E. Bandini, Electron. Commun. Probab., 20 (2015), pp. 1--13].
