Abstract

We are concerned in this paper with discrete-time Markov Decision Processes (MDPs) with Borel state and action spaces X and A, respectively, and the long-run expected average cost criterion. When X is a denumerable set, many necessary and/or sufficient conditions for the existence of optimal control policies are known. However, when X is a Borel space (i.e., a Borel subset of a complete separable metric space), most of the available results impose on the MDP very restrictive topological conditions (e.g., compactness) and/or strong recurrence assumptions (such as Doeblin's condition); see, e.g., [4, 9, 12] and their references. Another related work is [7], where we studied MDPs from the viewpoint of the recurrence (or ergodicity) properties of the state process. In the present paper, however, we are concerned with the existence of average optimal policies obtained by looking at (static) optimization problems (see condition C5 in Section 4) related, and in some cases equivalent, to the existence of a bounded solution to the so-called Optimality Equation (see C4 in Section 4). These optimization problems are dual in the sense that, under appropriate conditions, the existence of an optimal solution to one of the problems implies the existence of an optimal solution to the other.
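For orientation, the following is a sketch, in standard notation of our own choosing (not necessarily that of Section 4), of the two objects referred to above: the long-run expected average cost of a policy \pi from initial state x, and the Optimality Equation whose bounded solutions are at issue. Here c denotes the one-stage cost, Q the transition law, A(x) the set of admissible actions at x, \rho the optimal average cost, and h a bounded measurable function on X.

% Long-run expected average cost of policy \pi starting at state x,
% with state-action process (x_t, a_t):
J(\pi, x) \;=\; \limsup_{n \to \infty} \frac{1}{n}\,
  E_x^{\pi}\!\left[\sum_{t=0}^{n-1} c(x_t, a_t)\right].

% Average-cost Optimality Equation: a constant \rho and a bounded
% measurable function h on X satisfying
\rho + h(x) \;=\; \min_{a \in A(x)}
  \left\{ c(x, a) + \int_X h(y)\, Q(dy \mid x, a) \right\},
  \qquad x \in X.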
