Abstract

We consider a Markov Decision Process (MDP), over a finite or infinite horizon, augmented by so-called (σ,ρ)-burstiness constraints. Such constraints, which were introduced within the framework of network calculus, limit an additive quantity to a given rate over any time interval, plus an allowance for occasional, limited bursts. We introduce this class of constraints for MDP models and formulate the corresponding constrained optimization problems. Due to the burstiness constraints, constrained-optimal policies are generally history-dependent. We use a recursive form of the constraints to define an augmented-state model, for which the sufficiency of Markov or stationary policies is recovered and the standard theory may be applied, albeit over a larger state space. The analysis is mainly devoted to a characterization of feasible policies, followed by application to the constrained MDP optimization problem. A simple queueing example serves to illustrate some of the concepts and calculations involved.
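For concreteness, here is a minimal sketch of the constraint in its standard network-calculus form; the paper's exact notation may differ. A nonnegative additive quantity $c_t$ accrued at each stage is $(\sigma,\rho)$-burstiness constrained if

\[
\sum_{t=s+1}^{s+k} c_t \;\le\; \sigma + \rho\, k
\qquad \text{for all } s \ge 0,\ k \ge 1,
\]

so the quantity grows at long-run rate at most $\rho$, with a cumulative burst allowance of $\sigma$ over any window. The recursive form alluded to above can be expressed through a single budget variable,

\[
b_0 = \sigma, \qquad
b_{t+1} = \min\bigl\{\sigma,\; b_t + \rho - c_{t+1}\bigr\},
\]

with feasibility of the whole constraint family equivalent to $b_t \ge 0$ for all $t$; augmenting the MDP state with $b_t$ is what restores the sufficiency of Markov or stationary policies.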
