Abstract

The main goal of this paper is to study the infinite-horizon expected discounted continuous-time optimal control problem for piecewise deterministic Markov processes, with the control acting continuously on the jump intensity $\lambda$ and on the transition measure $Q$ of the process, but not on the deterministic flow $\phi$. The paper presents contributions for both the unconstrained and the constrained cases. The set of admissible control strategies is assumed to consist of policies, possibly randomized and depending on the history of the process, taking values in a set-valued action space. For the unconstrained case we provide sufficient conditions, based on the three local characteristics $\phi$, $\lambda$, $Q$ of the process and the semicontinuity properties of the set-valued action space, guaranteeing the existence and uniqueness of a solution to the integro-differential optimality equation (the so-called Bellman--Hamilton--Jacobi equation), as well as the existence of an optimal (and also $\delta$-optimal) ...
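For orientation, the integro-differential optimality equation mentioned above typically takes the following form for discounted control of piecewise deterministic Markov processes. This is only a generic sketch: the value function $V$, running cost $f$, discount rate $\alpha$, state space $E$, admissible action set $\mathbf{A}(x)$, and the operator $\mathcal{X}$ differentiating along the flow are illustrative notation, not taken from the paper, and boundary conditions at forced-jump points are omitted.

$$
\alpha V(x) \;=\; \mathcal{X}V(x) \;+\; \inf_{a \in \mathbf{A}(x)} \Big\{ f(x,a) \;+\; \lambda(x,a) \int_{E} \big[ V(y) - V(x) \big] \, Q(dy \mid x,a) \Big\},
$$

where $\mathcal{X}V(x) = \frac{d}{dt} V\big(\phi(x,t)\big)\big|_{t=0}$ denotes the derivative of $V$ along the deterministic flow $\phi$, and the control $a$ enters only through the jump intensity $\lambda$ and the transition measure $Q$, consistent with the setting described in the abstract.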
