Abstract
We consider a classical finite-horizon optimal control problem for continuous-time pure jump Markov processes described by means of a rate transition measure depending on a control parameter and controlled by a feedback law. For this class of problems the value function can often be characterized as the unique solution to the corresponding Hamilton–Jacobi–Bellman (HJB) equation. We prove a probabilistic representation for the value function, known as a nonlinear Feynman–Kac formula. It relates the value function to a backward stochastic differential equation (BSDE) driven by a random measure and with a sign constraint on its martingale part. We also prove existence and uniqueness results for this class of constrained BSDEs. The connection of the control problem with the constrained BSDE uses a control randomization method recently developed by several authors. This approach also allows us to prove that the value function of the original non-dominated control problem coincides with the value function of an auxiliary dominated control problem, expressed in terms of equivalent changes of probability measures.
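As a schematic illustration only (the notation, sign convention, and mark space $A$ below are assumptions in the spirit of the control randomization literature, not taken from this paper), a constrained BSDE of the kind described above typically has the form

```latex
% Schematic constrained BSDE on [0,T]: Y is the value process,
% Z the martingale integrand against the compensated random
% measure q(ds,da), and K a nondecreasing process enforcing
% the sign constraint on the martingale part.
Y_t = g(X_T) + \int_t^T f(s, X_s, I_s)\,ds
      + K_T - K_t
      - \int_t^T \!\! \int_A Z_s(a)\, q(ds\,da),
\qquad t \in [0,T],
```

together with a pointwise sign constraint on $Z$, the value function then being identified with the minimal solution component $Y$.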