Abstract

In this paper, weighted reward perturbed Markov decision processes with finite state and countable action spaces (semi-infinite WMDPs, for short) are considered. The “weighted reward” refers to an appropriately normalized convex combination of the discounted and the long-run average reward criteria. This criterion allows the controller to trade off short-term rewards against long-run rewards. In every application where both the discounted and the long-run average criteria have been proposed in the past, there is clearly a rationale for considering the weighted criterion. Of course, as with all Markov decision models, the standard weighted criterion model assumes that all the transition probabilities are known precisely. Since in most applications this is not the case, we consider the perturbed version of the weighted reward model. In the presence of perturbations, we prove that for many models a nearly optimal strategy can be found in the relatively simple class of “ultimately deterministic” strategies. These are strategies that behave just like deterministic stationary strategies after a certain point in time.
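For concreteness, a common way such a weighted criterion is written in the literature is sketched below; the symbols $\lambda$, $\beta$, $v_\beta$, and $\phi$ are illustrative notation assumed here and need not match the paper's own definitions or normalization.

% Illustrative sketch only: assumed notation, not the paper's exact formulation.
\[
  w_{\lambda,\beta}(\pi) \;=\; \lambda\,(1-\beta)\,v_\beta(\pi) \;+\; (1-\lambda)\,\phi(\pi),
  \qquad \lambda \in [0,1],\ \beta \in (0,1),
\]
where $v_\beta(\pi)$ denotes the expected $\beta$-discounted reward of strategy $\pi$, $\phi(\pi)$ its long-run average expected reward, and the factor $(1-\beta)$ normalizes the discounted term so that both components are on a comparable scale. Setting $\lambda = 1$ or $\lambda = 0$ recovers the (normalized) discounted or the average reward criterion, respectively.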
