Abstract

This paper presents a successive approximation method for solving systems of nested functional equations that arise, for example, when seeking policies for Markov renewal programs that are maximal-gain or optimal under the more selective discounted- and average-overtaking optimality criteria. In particular, a successive approximation method is given for finding the optimal bias vector and bias-optimal policies. Applications to a number of additional stochastic control models are pointed out. Our method is based on systems of simultaneously generated (single-equation) value-iteration schemes.
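As a point of reference for the single-equation building block the abstract mentions, the following is a minimal sketch of plain value iteration for a small discounted Markov decision process. This is an illustration of the standard scheme, not the paper's nested method; the transition matrices and rewards are made-up toy data.

```python
import numpy as np

def value_iteration(P, r, gamma=0.9, tol=1e-8, max_iter=10_000):
    """Standard value iteration.

    P: (A, S, S) array of transition probabilities per action.
    r: (A, S) array of one-step rewards per action and state.
    Returns the value vector V and a greedy policy (one action per state).
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Q[a, s] = r[a, s] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = r + gamma * (P @ V)
        V_new = Q.max(axis=0)           # Bellman optimality update
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V, Q.argmax(axis=0)

# Toy 2-state, 2-action example (hypothetical data for illustration only)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.7, 0.3]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])
V, policy = value_iteration(P, r)
```

The paper's contribution, per the abstract, is to run systems of such schemes simultaneously so that the nested equations defining gain, bias, and more selective optimality criteria can be approximated together.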
