Abstract

In this paper, we consider a distributed reinforcement learning setting where agents communicate with a central entity in a shared environment to maximize a global reward. A main challenge in this setting is that the randomness of the wireless channel perturbs each agent's model update, while the updates of multiple agents may interfere with one another when communicating under limited bandwidth. To address this issue, we propose a novel distributed reinforcement learning algorithm based on the alternating direction method of multipliers (ADMM) and "over-the-air aggregation" using an analog transmission scheme, referred to as A-RLADMM. Our algorithm incorporates the wireless channel into the formulation of the ADMM method, which enables agents to transmit each element of their updated models over the same channel using analog communication. Numerical experiments on a multi-agent collaborative navigation task show that our proposed algorithm significantly outperforms the digital communication baseline of A-RLADMM (D-RLADMM), the lazily aggregated policy gradient (RL-LAPG), as well as the analog (A-FRL) and digital (D-FRL) communication versions of vanilla federated learning (FL).
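
To make the over-the-air aggregation step concrete, the snippet below is a minimal sketch, assuming a flat-fading channel with per-agent channel inversion and additive Gaussian noise at the central entity. The array names, channel model, and noise level are illustrative assumptions, not the paper's exact A-RLADMM formulation.

```python
# Minimal sketch of over-the-air (analog) aggregation of agent model updates.
# Assumptions (not from the paper): each agent knows its own channel gain and
# pre-compensates it; the receiver sees the superposition of all transmissions
# plus Gaussian noise. In practice a transmit-power constraint would also apply.
import numpy as np

rng = np.random.default_rng(0)
num_agents, model_dim = 4, 8
noise_std = 0.05

# Each agent's local model update (e.g., the result of a local primal step).
local_updates = [rng.normal(size=model_dim) for _ in range(num_agents)]

# Per-agent complex channel gains; agents invert their own channel before sending.
channels = rng.normal(size=num_agents) + 1j * rng.normal(size=num_agents)
tx_signals = [u / h for u, h in zip(local_updates, channels)]

# All agents transmit simultaneously in the same band: the wireless channel
# naturally sums the faded signals, and the receiver adds its own noise.
rx = sum(h * x for h, x in zip(channels, tx_signals))
rx = rx + noise_std * (rng.normal(size=model_dim) + 1j * rng.normal(size=model_dim))

# The central entity recovers a noisy estimate of the aggregate (here, the mean).
aggregated_update = rx.real / num_agents
print(np.linalg.norm(aggregated_update - np.mean(local_updates, axis=0)))
```

Because the superposition happens in the channel itself, all agents can share the same bandwidth for a single aggregation round instead of transmitting their updates sequentially.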

Highlights

  • Owing to the stringent requirements of 5G and beyond applications such as Industry 4.0, network edge intelligence is of paramount importance [1]

  • We focus on the fully cooperative setting, which covers a large portion of multi-agent reinforcement learning (MARL) settings, where multiple agents interact in a shared environment and collaborate to maximize their rewards

  • Simulation results show that our proposed algorithm significantly outperforms the digital communication version of A-RLADMM (D-RLADMM), the lazily aggregated policy gradient (RL-LAPG), and the digital (D-FRL) and analog (A-FRL) communication versions of vanilla FL, since it significantly reduces the number of communication uploads


Summary

INTRODUCTION

Owing to the stringent requirements of 5G and beyond applications such as Industry 4.0, network edge intelligence is of paramount importance [1]. One key challenge in these applications is how to optimize distributed systems in which different entities (agents) communicate wirelessly in the same environment and share limited communication resources (e.g., limited bandwidth). We focus on the fully cooperative setting, which covers a large portion of multi-agent reinforcement learning (MARL) settings, where multiple agents interact in a shared environment and collaborate to maximize their rewards. MARL entails sequential decision making, in which agents take actions over time in a stochastic environment. Unlike supervised learning, where the data distribution is stationary, the distribution used to sample data in the RL setting depends on time-varying policy parameters, which introduces non-stationarity and makes the problem more challenging. Many MARL algorithms have been proposed to solve real-world problems such as spectrum sharing [2], 360-degree video streaming [3], multiplayer gaming [4], and robot navigation [5].
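
To illustrate this non-stationarity, the toy sketch below samples actions from a softmax policy and applies a REINFORCE-style update; because each update changes the policy parameters, it also changes the distribution the next batch of data is drawn from. The bandit-style environment, reward values, and learning rate are illustrative assumptions, not the setting studied in the paper.

```python
# Toy illustration of non-stationary sampling in RL: every gradient step on the
# policy parameters shifts the distribution that generates the next data batch.
import numpy as np

rng = np.random.default_rng(1)
expected_rewards = np.array([1.0, 0.2, 0.5])  # hidden per-action mean rewards
theta = np.zeros(3)                            # softmax policy parameters
lr = 0.1

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for step in range(200):
    probs = softmax(theta)                     # sampling distribution depends on theta
    action = rng.choice(3, p=probs)
    reward = expected_rewards[action] + 0.1 * rng.normal()
    grad_log = -probs                          # score function of the softmax policy
    grad_log[action] += 1.0
    theta += lr * reward * grad_log            # this update changes the next distribution

print("final action probabilities:", softmax(theta))
```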

Related Works
Our Contributions
Problem Statement
Policy Parametrization
Static and Noise Free Channel
Time-varying and Noisy Channels
A-RLADMM FRAMEWORK
NUMERICAL EVALUATION
Problem Set-up
Network and Communication Environment
Baselines
Results and Discussion
Findings
CONCLUSION