Abstract

We consider the state and control path-dependent stochastic optimal control problem for jump-diffusion models, where the dynamics and the objective functional depend on the (current and past) paths of the state and control processes. We prove the dynamic programming principle for the value functional, for which, unlike in the existing literature, the Skorohod metric is necessary to maintain the separability of the càdlàg (state and control) path spaces. We introduce the state and control path-dependent integro-type Hamilton–Jacobi–Bellman (PIHJB) equation, whose nonlocal path-dependent integral operator involves the Lévy measure. Then, using the functional Itô calculus for càdlàg paths, we establish the verification theorem, which gives a sufficient condition for optimality in terms of the solution to the PIHJB equation. Finally, we apply the verification theorem to the linear-quadratic optimal control problem for jump-diffusion models with delay and to a control path-dependent problem, for which explicit optimal solutions are obtained by solving the corresponding PIHJB equation.
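For orientation only, a schematic and deliberately simplified version of the setting is sketched below in LaTeX. The coefficients b, sigma, gamma, the Lévy measure nu, the running cost f, the control set A, and the path derivatives \partial_t V, \partial_x V, \partial_{xx} V are generic placeholders chosen for illustration; they are not the notation or the exact formulation of the paper.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Schematic controlled jump-diffusion whose coefficients depend on the
% stopped state path X_{t \wedge \cdot} and control path u_{t \wedge \cdot}
% (placeholder notation, not the paper's):
\begin{equation*}
  dX(t) = b\big(t, X_{t\wedge\cdot}, u_{t\wedge\cdot}\big)\,dt
        + \sigma\big(t, X_{t\wedge\cdot}, u_{t\wedge\cdot}\big)\,dW(t)
        + \int_E \gamma\big(t, X_{t-\wedge\cdot}, u_{t\wedge\cdot}, e\big)\,\tilde N(dt,de),
\end{equation*}
% with W a Brownian motion and \tilde N the compensated Poisson random
% measure associated with the Lévy measure \nu. A schematic PIHJB equation
% for the value functional V, written with the path (functional) derivatives
% of functional Itô calculus, then takes the form
\begin{equation*}
  0 = \partial_t V + \inf_{a \in A} \Big\{
        b^{\top}\partial_x V
        + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma\sigma^{\top}\partial_{xx} V\big)
        + \int_E \big[ V\big(t, x_{t\wedge\cdot} \oplus \gamma(e)\big) - V
        - \gamma(e)^{\top}\partial_x V \big]\,\nu(de)
        + f \Big\},
\end{equation*}
% where x_{t\wedge\cdot} \oplus \gamma(e) stands for the path displaced by a
% jump of size \gamma(e) at time t (a hypothetical shorthand), and V at the
% terminal time matches the terminal cost.
\end{document}

The nonlocal integral term against \nu is what distinguishes the integro-type (jump-diffusion) equation from the purely path-dependent HJB equation of continuous diffusions; the paper's verification theorem is stated for its own precise version of such an equation.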
