Abstract
Motivated by a planning-horizon result for continuous-time Markov decision chains, we study decision rules, called preferred, which may be used in the initially stationary part of nearly optimal policies. We characterize these rules and then, under conditions involving state recurrence and accessibility, address the problem of finding such rules. We also discuss the connection between preferred rules and certain decision rules for the discounted process, and the role of preferred rules in optimal policies.