Abstract
This article develops a fully decentralized multiagent algorithm for policy evaluation. The proposed scheme applies to two distinct scenarios. In the first scenario, a collection of agents, each with a distinct dataset gathered by following a different behavior policy (none of which is required to explore the full state space) in a separate instance of the same environment, collaborate to evaluate a common target policy. The network approach enables efficient exploration of the state space and allows all agents to converge to the optimal solution even in situations where no agent could converge on its own without cooperation. The second scenario is that of multiagent games, in which the state is global and the rewards are local; here, agents collaborate to estimate the value function of a target team policy. The algorithm combines off-policy learning, eligibility traces, and linear function approximation; it is of the variance-reduced kind and achieves linear convergence with O(1) memory requirements. The linear convergence of the algorithm is established analytically, and simulations illustrate the effectiveness of the method.
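To make the ingredients listed in the abstract concrete, the following is a minimal sketch of the kind of decentralized update being described: each agent runs off-policy TD(lambda) with linear function approximation on its own data and then averages its estimate with its neighbors over the network. The variance-reduction mechanism and the exact update rules of the paper are not reproduced here; the function names, step sizes, and combination matrix `A` are illustrative assumptions only.

```python
# Hedged sketch: per-agent off-policy TD(lambda) with linear function
# approximation, followed by a consensus (combination) step over the network.
# This is NOT the paper's algorithm; all parameters and names are assumptions.
import numpy as np

def decentralized_td_lambda(trajectories, phi, A, gamma=0.99, lam=0.7, alpha=0.05):
    """One pass of local off-policy TD(lambda) followed by neighbor averaging.

    trajectories: list over agents; each entry is a list of tuples
                  (s, a, r, s_next, rho), with rho = pi_target(a|s) / pi_behavior(a|s).
    phi:          feature map, phi(s) -> np.ndarray of dimension d.
    A:            doubly stochastic combination matrix (N x N) encoding the network.
    """
    N = len(trajectories)
    d = phi(trajectories[0][0][0]).shape[0]
    W = np.zeros((N, d))                      # one weight vector per agent

    # Adaptation step: each agent updates on its own (behavior-policy) data.
    for i, traj in enumerate(trajectories):
        w = W[i].copy()
        z = np.zeros(d)                       # eligibility trace
        for (s, a, r, s_next, rho) in traj:
            f, f_next = phi(s), phi(s_next)
            delta = r + gamma * (f_next @ w) - (f @ w)   # TD error
            z = rho * (gamma * lam * z + f)              # importance-weighted trace
            w = w + alpha * delta * z
        W[i] = w

    # Combination step: neighbors average their intermediate estimates.
    return A @ W
```

In this sketch, the combination matrix `A` plays the role of the network cooperation emphasized in the abstract: even if an individual agent's behavior policy does not explore the full state space, averaging with neighbors propagates information across the graph.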