Abstract
This chapter formalizes multi-objective reinforcement learning (MORL) problems in which there are multiple conflicting objectives with unknown weights. The goal is to collect all Pareto optimal policies so that they can be adapted to a learner's situation. Because previous methods incur huge learning costs, this chapter proposes a novel model-based MORL method based on reward occurrence probability (ROP) with unknown weights. The method has three main features. First, the average reward of a policy is defined as the inner product of its ROP vector and a weight vector. Second, the method learns the ROP vector of each policy instead of Q-values. Third, Pareto optimal deterministic policies directly form the vertices of a convex hull in the ROP vector space; therefore, Pareto optimal policies can be computed just once, independently of the weights, by the Quickhull algorithm. This chapter reports the authors' current work in stochastic learning environments with up to 12 states, three actions, and three or four reward rules.
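A minimal sketch of the geometric idea summarized above, not the authors' implementation: with hypothetical ROP vectors for a handful of deterministic policies, the weight-independent candidate set is the set of convex-hull vertices (Quickhull, here via SciPy's Qhull wrapper), and for any given weight vector the average reward is an inner product that is maximized at one of those vertices. The ROP values and weight vector below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical ROP vectors: one row per deterministic policy,
# one column per reward rule (three reward rules in this toy example).
rop = np.array([
    [0.9, 0.2, 0.1],
    [0.2, 0.8, 0.3],
    [0.1, 0.3, 0.9],
    [0.5, 0.5, 0.5],
    [0.3, 0.2, 0.2],
    [0.6, 0.6, 0.1],
])

# Quickhull runs once, independently of any weight vector.
hull = ConvexHull(rop)
candidates = hull.vertices  # indices of policies whose ROP vectors are hull vertices

# Later, for any concrete weight vector, the average reward of a policy is the
# inner product of its ROP vector and the weights; the maximum over all policies
# is attained at a hull vertex, so only the candidates need to be checked.
w = np.array([0.2, 0.3, 0.5])  # example (assumed) weight vector
avg_rewards = rop[candidates] @ w
best = candidates[np.argmax(avg_rewards)]
print(f"best policy index: {best}, average reward: {avg_rewards.max():.3f}")
```

For nonnegative weights only the upper-facing hull vertices can ever be optimal, but selecting the argmax over all hull vertices, as above, still returns the correct policy for any given weight vector.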