Abstract

In this paper, we propose a distributed architecture for reinforcement learning in a multi-agent environment, where agents share learned information over a distributed network. The architecture is a hybrid of master/slave and peer-to-peer designs: a master node assigns a workload (a portion of the terrain) to each node, but the master also mediates communications among all other system nodes, and in that sense the architecture is peer-to-peer. The system is loosely coupled in that slave nodes know only of the master node's existence and are concerned only with their own workload (their portion of the terrain). As part of this architecture, we show how agents communicate with agents on the same or different nodes and share information that pertains to all agents, including obstacle barriers. A main contribution of the paper is multi-agent reinforcement learning in a distributed system where agents have no knowledge of their environment beyond what is available on the computing node on which they run. We show how agents running on the same or different nodes coordinate the sharing of their respective environment states to collaboratively perform their tasks.
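The paper's implementation is not reproduced here, but the following minimal Python sketch illustrates the hybrid master/slave message flow the abstract describes: a master partitions the terrain among slave nodes, which know only of the master, and the master relays locally discovered obstacle information to all other nodes. All names (MasterNode, SlaveNode, broadcast_obstacle) and the row-based partitioning are hypothetical illustrations, not the authors' actual design.

```python
# Hypothetical sketch of the hybrid master/slave architecture described
# in the abstract; names and partitioning scheme are illustrative only.
from dataclasses import dataclass, field


@dataclass
class SlaveNode:
    """A worker node that knows only the master and its own terrain portion."""
    node_id: int
    region: tuple                      # (row_start, row_end) slice of the terrain
    known_obstacles: set = field(default_factory=set)

    def report_obstacle(self, cell):
        """Record a locally discovered obstacle, to be relayed via the master."""
        self.known_obstacles.add(cell)
        return cell

    def receive_shared_state(self, obstacles):
        """Merge obstacle information relayed from other nodes."""
        self.known_obstacles |= obstacles


class MasterNode:
    """Assigns terrain portions and mediates all inter-node communication,
    giving the system its peer-to-peer character through the master."""

    def __init__(self, terrain_rows, num_slaves):
        rows_per_node = terrain_rows // num_slaves
        self.slaves = [
            SlaveNode(i, (i * rows_per_node, (i + 1) * rows_per_node))
            for i in range(num_slaves)
        ]

    def broadcast_obstacle(self, sender_id, cell):
        """Relay an obstacle discovered on one node to every other node."""
        for slave in self.slaves:
            if slave.node_id != sender_id:
                slave.receive_shared_state({cell})


# Usage: node 0 discovers an obstacle; the master shares it with node 1,
# even though the two slave nodes never communicate directly.
master = MasterNode(terrain_rows=100, num_slaves=2)
cell = master.slaves[0].report_obstacle((3, 7))
master.broadcast_obstacle(sender_id=0, cell=cell)
assert (3, 7) in master.slaves[1].known_obstacles
```

Under these assumptions, the key design point is that slaves hold no references to one another: all shared state, such as obstacle barriers, flows through the master, which keeps the slaves loosely coupled.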
