Abstract

Traditional federated reinforcement learning methods aim to find a single optimal global policy for all agents. However, because the agents' environments are heterogeneous, that global policy is often suboptimal for individual agents. To address this problem, we propose a personalized federated reinforcement learning method, named perFedDC, which aims to learn an optimal personalized policy for each agent. Our method maintains a global model and multiple local models, and uses the ℓ2-norm to measure the distance between the global model and each local model. We introduce this distance constraint as a regularization term in the local model update to prevent excessive policy drift. While the distance constraint facilitates experience sharing, it must balance sharing against personalization: each agent should benefit from shared experience as much as possible while still developing a policy suited to its own environment. Experiments demonstrate that perFedDC accelerates agent training stably while respecting the privacy constraints of federated learning. Furthermore, agents newly added to the federated system quickly develop effective policies with the aid of the converged global policy.
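As a rough illustration of the kind of update the abstract describes, the sketch below adds an ℓ2 proximal penalty between local and global parameters to an ordinary policy loss. This is a minimal sketch under assumptions: the function name proximal_penalty, the weight lam, the toy linear networks, and the placeholder RL loss are all illustrative and are not taken from the paper, whose exact objective may differ.

```python
# Minimal sketch of a distance-constrained local policy update
# (assumed form; perFedDC's exact objective may differ).
import torch

def proximal_penalty(local_params, global_params, lam):
    """(lam/2) * ||theta_local - theta_global||_2^2, summed over all tensors.

    The global parameters are detached so the penalty only pulls the
    local model toward the (fixed) global model during a local update.
    """
    return 0.5 * lam * sum(
        torch.sum((p_l - p_g.detach()) ** 2)
        for p_l, p_g in zip(local_params, global_params)
    )

# Example: one regularized local update step on dummy data.
local_net = torch.nn.Linear(4, 2)    # stand-in for an agent's local policy
global_net = torch.nn.Linear(4, 2)   # stand-in for the aggregated global policy
opt = torch.optim.SGD(local_net.parameters(), lr=1e-2)

states = torch.randn(8, 4)                           # dummy batch of observations
rl_loss = -local_net(states).log_softmax(-1).mean()  # placeholder policy loss

loss = rl_loss + proximal_penalty(local_net.parameters(),
                                  global_net.parameters(), lam=0.1)
opt.zero_grad()
loss.backward()
opt.step()
```

With lam = 0, each agent trains a fully personalized policy with no sharing; as lam grows, local policies are pulled toward the global model, trading personalization for shared experience. The balance the abstract mentions amounts to choosing this weight appropriately.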
