Abstract

As the complexity of design, deployment, and operation in mobile systems increases, adaptive self-learning techniques have become essential for mitigating and controlling that complexity. Artificial intelligence, and reinforcement learning in particular, has shown great potential for learning complex tasks from observations. Most ongoing reinforcement learning research focuses on single-agent settings, assuming access to a globally observable state and action space. In many real-world settings, such as LTE or 5G networks, decision making is distributed, and each decision point often has access only to local state. In such settings, multi-agent learning may be preferable, with the added challenge of ensuring that all agents collaboratively work towards a common goal. We present a novel cooperative and distributed actor-critic multi-agent reinforcement learning algorithm. We claim the approach is sample efficient, both in selecting observation samples and in assigning credit among subsets of collaborating agents.
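To make the actor-critic multi-agent setting concrete, the sketch below shows a generic tabular one-step actor-critic with two independent agents that share a reward but observe only their own actions. This is an illustrative toy (a two-agent coordination game with hypothetical names such as `ActorCriticAgent` and `train`), not the algorithm proposed in the paper.

```python
import math
import random

class ActorCriticAgent:
    """Minimal tabular one-step actor-critic for a bandit-style local task.

    The actor is a softmax policy over action preferences; the critic is a
    scalar value estimate of the agent's (single) local state.
    """
    def __init__(self, n_actions, alpha=0.1, beta=0.1):
        self.prefs = [0.0] * n_actions   # actor: action preferences
        self.value = 0.0                 # critic: state-value estimate
        self.alpha = alpha               # actor step size
        self.beta = beta                 # critic step size

    def policy(self):
        m = max(self.prefs)              # stabilise the softmax
        exps = [math.exp(p - m) for p in self.prefs]
        z = sum(exps)
        return [e / z for e in exps]

    def act(self, rng):
        pi = self.policy()
        r, cum = rng.random(), 0.0
        for a, p in enumerate(pi):
            cum += p
            if r < cum:
                return a
        return len(pi) - 1

    def update(self, action, reward):
        # One-step TD error; no next state in this bandit-style task.
        delta = reward - self.value
        self.value += self.beta * delta
        pi = self.policy()
        for a in range(len(self.prefs)):
            grad = (1.0 if a == action else 0.0) - pi[a]
            self.prefs[a] += self.alpha * delta * grad


def train(steps=2000, seed=0):
    """Two agents receive a shared reward of 1 only when their actions match;
    each agent updates from its own action and the shared reward alone."""
    rng = random.Random(seed)
    agents = [ActorCriticAgent(2), ActorCriticAgent(2)]
    rewards = []
    for _ in range(steps):
        acts = [ag.act(rng) for ag in agents]
        r = 1.0 if acts[0] == acts[1] else 0.0
        for ag, a in zip(agents, acts):
            ag.update(a, r)              # purely local update
        rewards.append(r)
    return agents, rewards
```

Note how credit assignment is implicit here: every agent reinforces its own action with the same shared reward, which is precisely the coarse scheme that more refined credit-assignment methods, such as the one claimed in the abstract, aim to improve on.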
