Abstract

When solving optimal impulse control problems, the dynamic programming approach can be applied in two different ways. At each time moment, one can decide whether to apply an impulse of a particular type, leading to an instantaneous change of the state, or to apply no impulse at all; alternatively, one can plan an impulse after a certain time interval, optimising the length of that interval along with the type of the impulse. The first method leads to the differential Bellman equation, the second to the integral Bellman equation. The aim of the present article is to prove the equivalence of these two Bellman equations in a range of specific models: abstract dynamical systems, controlled ordinary differential equations, piecewise deterministic Markov processes, and continuous-time Markov decision processes.
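
For orientation, the contrast between the two equations can be sketched in a simple discounted deterministic setting; the notation below (vector field $f$, flow $\phi_t$, running cost $c$, impulse cost $C$, jump map $l$, discount rate $\alpha > 0$) is illustrative and is not taken from the article itself. In differential form (a quasi-variational inequality), the value function $V$ satisfies, at points of differentiability,
\[
\min\Bigl\{\, c(x) + \langle f(x), \nabla V(x)\rangle - \alpha V(x),\; \inf_{u}\bigl[ C(x,u) + V(l(x,u)) \bigr] - V(x) \Bigr\} = 0,
\]
whereas in integral form it satisfies
\[
V(x) = \inf_{\theta \in (0,\infty],\, u}\Bigl[ \int_0^{\theta} e^{-\alpha t}\, c(\phi_t(x))\, dt \;+\; e^{-\alpha \theta}\bigl( C(\phi_\theta(x), u) + V(l(\phi_\theta(x), u)) \bigr) \Bigr],
\]
where $\phi_t(x)$ is the flow of $\dot{x} = f(x)$ with $\phi_0(x) = x$, and the impulse term is understood to vanish when $\theta = \infty$ (no impulse is ever applied). The equivalence in question is, roughly, that a suitably regular function solves the first equation if and only if it solves the second.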
