Abstract
Iwamoto recently established, via an invariant imbedding, a formal transformation that constructs, for a given controlled Markov chain with a non-additive forward recursive objective function criterion, a controlled Markov chain that can be solved in a backward manner, as in backward induction for finite-horizon Markov decision processes (MDPs). Chang et al. presented formal methods, called "parallel rollout" and "policy switching," for combining multiple given policies in MDPs, and showed that the policies generated by both methods improve upon all of the policies being combined. This brief paper extends parallel rollout and policy switching to forward recursive objective function criteria and shows that a similar improvement property holds, as in the MDP case. We further discuss how to implement these methods via simulation.
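To make the two combination methods concrete: policy switching acts according to whichever base policy has the highest estimated value at the current state, while parallel rollout performs a one-step lookahead against the maximum of the base policies' values. The following is a minimal Python sketch of simulation-based versions of both, assuming a hypothetical simulator interface (`sim.step`, `sim.actions`) and, for simplicity, an additive reward criterion rather than the general forward recursive criterion treated in the paper; all names are illustrative, not the paper's notation.

```python
def rollout_value(sim, state, policy, horizon):
    """Estimate the value of following `policy` from `state` for `horizon` steps."""
    total = 0.0
    for _ in range(horizon):
        action = policy(state)
        state, reward = sim.step(state, action)  # hypothetical simulator call
        total += reward  # additive criterion shown for simplicity
    return total

def policy_switching(sim, policies, state, horizon, n_samples=32):
    """Act as the base policy with the highest Monte Carlo value estimate at `state`."""
    def mean_value(pi):
        return sum(rollout_value(sim, state, pi, horizon)
                   for _ in range(n_samples)) / n_samples
    best = max(policies, key=mean_value)
    return best(state)

def parallel_rollout(sim, policies, state, horizon, n_samples=32):
    """One-step lookahead against the max over the base policies' rollout values."""
    def q_estimate(action):
        total = 0.0
        for _ in range(n_samples):
            next_state, reward = sim.step(state, action)
            total += reward + max(rollout_value(sim, next_state, pi, horizon - 1)
                                  for pi in policies)
        return total / n_samples
    return max(sim.actions(state), key=q_estimate)
```

Both constructions only require sample trajectories from a simulator, which is why the paper's improvement guarantees lend themselves to simulation-based implementation.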