Abstract

Reinforcement learning has recently been studied in various fields and applied to the optimal control of real devices (e.g., robotic arms). In this paper, we enable multiple reinforcement learning agents to learn optimal control policies on their own devices, which are of the same type but have slightly different dynamics. For such devices, there is no guarantee that an agent that interacts with only one device and learns its optimal control policy will also control another device well. We would therefore have to apply independent reinforcement learning to each device, which is time-consuming. To solve this problem, we propose a new federated reinforcement learning architecture in which each agent, working on its own device, shares its learning experience with the others and transfers the parameters of a mature policy model to the other agents. We incorporate the Actor-Critic PPO algorithm into each agent of the proposed collaborative architecture and propose an efficient procedure for gradient sharing and model transfer. We also use edge computing to mitigate the network problems that arise when multiple real devices are trained at the same time. Using multiple rotary inverted pendulum devices, we demonstrate that the proposed federated reinforcement learning scheme effectively facilitates the learning process across devices, and that learning is faster when more agents are involved.
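To make the two sharing mechanisms named in the abstract concrete, the sketch below illustrates one possible realization of gradient sharing (averaging per-parameter gradients across agents before each optimizer step) and model transfer (copying a mature policy's parameters into another agent). This is a minimal sketch assuming PyTorch; the `ActorCritic` network, its layer sizes, and the helper names `share_gradients` and `transfer_model` are illustrative assumptions, not the paper's actual implementation.

```python
import copy
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Small actor-critic network; dimensions are placeholders."""
    def __init__(self, obs_dim=4, act_dim=2):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh())
        self.actor = nn.Linear(64, act_dim)   # policy logits
        self.critic = nn.Linear(64, 1)        # state-value estimate

    def forward(self, obs):
        h = self.body(obs)
        return self.actor(h), self.critic(h)

def share_gradients(agents):
    """Gradient sharing (assumed form): average each parameter's
    gradient across all agents, in place, before the optimizer step."""
    n = len(agents)
    for params in zip(*(a.parameters() for a in agents)):
        grads = [p.grad for p in params if p.grad is not None]
        if len(grads) == n:  # average only when every agent contributed
            mean_grad = torch.stack(grads).mean(dim=0)
            for p in params:
                p.grad = mean_grad.clone()

def transfer_model(source, target):
    """Model transfer (assumed form): copy a mature policy's
    parameters into another agent's network."""
    target.load_state_dict(copy.deepcopy(source.state_dict()))
```

In a federated setup of this kind, `share_gradients` would typically run on a central or edge node after each agent computes a local PPO update, while `transfer_model` would be invoked once an agent's policy is judged mature enough to bootstrap the others.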

