Abstract

We consider a dynamic programming problem with an arbitrary state space and bounded rewards. Is it possible to define a unique limit value for the problem as the ``patience'' of the decision-maker tends to infinity? We consider, for each evaluation $\theta$ (a probability distribution over the positive integers), the value function $v_{\theta}$ of the problem in which the weight of stage $t$ is $\theta_t$, and we investigate the uniform convergence of a sequence $(v_{\theta^k})_k$ when the ``impatience'' of the evaluations vanishes, in the sense that $\sum_{t} | \theta^k_{t}-\theta^k_{t+1}| \rightarrow_{k \to \infty} 0$. We prove that this uniform convergence occurs if and only if the metric space $\{v_{\theta^k},\, k\geq 1\}$ is totally bounded. Moreover, there exists a particular function $v^*$, independent of the chosen sequence $(\theta^k)_k$, such that any limit point of such a sequence of value functions is precisely $v^*$. The result applies in particular to discounted payoffs when the discount factor vanishes, as well as to average payoffs when the number of stages goes to infinity, and it extends to models with stochastic transitions.
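As an illustration of the impatience condition (a quick computation of ours, not stated in the abstract): for the $\lambda$-discounted evaluation $\theta_t = \lambda(1-\lambda)^{t-1}$, one has $\theta_t - \theta_{t+1} = \lambda^2(1-\lambda)^{t-1}$, hence $\sum_{t} |\theta_t - \theta_{t+1}| = \lambda$; for the $n$-stage Cesàro evaluation $\theta_t = \frac{1}{n}$ if $t \leq n$ and $0$ otherwise, the sum equals $\frac{1}{n}$. The impatience therefore vanishes exactly when $\lambda \to 0$ or $n \to \infty$, which is why discounted and average payoffs fall within the scope of the result.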
