Abstract

We study continuity and robustness properties of infinite-horizon average expected cost problems with respect to (controlled) transition kernels, and apply these results to the problem of robustness of control policies designed for approximate models when applied to actual systems. We show that sufficient conditions presented in the literature for discounted-cost problems are, in general, not sufficient to ensure robustness for average-cost problems. However, we show that the average optimal cost is continuous under convergence of controlled transition kernel models, where convergence of models entails (i) continuous weak convergence in the state and action variables, and (ii) continuous setwise convergence in the actions for every fixed state variable, in addition to either uniform ergodicity or certain regularity conditions. We establish that the mismatch error due to applying a control policy designed for an incorrectly estimated model to the true model decreases to zero as the incorrect model approaches the true model under the stated convergence criteria. Our findings significantly relax the conditions required in related studies in the literature, which have primarily considered the more restrictive total variation convergence criterion. Applications to robustness with respect to models estimated from empirical data (where the almost sure weak convergence criterion typically holds, but stronger criteria do not) are studied, and conditions for asymptotic robustness to data-driven learning are established.
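As a rough sketch of the robustness statement above, and using notation assumed here rather than taken from the paper, let P denote the true controlled transition kernel, P_n an approximate (e.g., estimated) kernel converging to P in the sense described above, gamma_n* an average-cost optimal policy designed for P_n, and J(x, gamma; P) the infinite-horizon average expected cost under policy gamma, kernel P, and initial state x. The mismatch (robustness) result can then be summarized as

\[
\lim_{n \to \infty} \Big| J\big(x, \gamma_n^{*}; P\big) \;-\; \inf_{\gamma} J\big(x, \gamma; P\big) \Big| \;=\; 0 ,
\]

that is, the performance loss incurred by applying the policy designed for the approximate model P_n to the true model P vanishes as P_n approaches P under the stated convergence and ergodicity/regularity conditions.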
