Abstract

This work studies dynamic coverage control of a multi-agent system using deep reinforcement learning. Dynamic coverage control is a type of cooperative control that requires a multi-agent system to dynamically monitor an area of interest over time. To develop motion control laws, most previous works rely heavily on knowledge of system models, such as the environment model and the agents' kinematics/dynamics. However, acquiring an accurate model can be restrictive and even impossible in many practical applications. Another challenge is that agents often have limited communication capability in practice: two agents may exchange information only when they are within a certain distance of each other. To address these challenges, a multi-agent deep reinforcement learning (MADRL) based control framework is developed that enables agents to learn control policies directly from interactions with the environment, achieving dynamic coverage while preserving network connectivity. The developed MADRL framework is model-free and employs centralized training with decentralized execution, in which agents coordinate using only local information and do not need to know other agents' strategies during the execution phase. Numerical simulations demonstrate the effectiveness of the developed control strategy.
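To illustrate the centralized-training, decentralized-execution structure described above, the following is a minimal toy sketch. All names, the one-dimensional coverage setup, the hand-coded spreading policy, and the communication radius are illustrative assumptions, not the paper's actual learned policies or environment; the sketch only shows which information each component may access (a centralized critic sees the joint state during training, while each executing agent sees only neighbors within its communication range).

```python
# Hypothetical sketch of centralized training / decentralized execution
# (CTDE) for coverage on a 1-D line. Everything here is an assumed toy
# stand-in for the paper's learned MADRL policies.

N_AGENTS = 3
COMM_RANGE = 2.0  # assumed communication radius


def local_observation(positions, i):
    """Decentralized: agent i sees only relative positions of
    neighbors within COMM_RANGE (its local communication graph)."""
    return [p - positions[i] for j, p in enumerate(positions)
            if j != i and abs(p - positions[i]) <= COMM_RANGE]


def decentralized_policy(obs):
    """Each agent acts on local information only: step away from the
    nearest neighbor to spread coverage; stay put if isolated. A
    hand-coded stand-in for a learned actor network."""
    if not obs:
        return 0.0
    nearest = min(obs, key=abs)
    return -0.1 if nearest > 0 else 0.1


def centralized_critic(positions):
    """Available during training only: scores the JOINT state, here
    using total pairwise spread as a crude coverage proxy."""
    return sum(abs(a - b) for a in positions for b in positions)


# Execution phase: every agent applies its own policy on local data;
# no agent queries the critic or other agents' strategies.
positions = [0.0, 0.5, 1.0]
for _ in range(20):
    actions = [decentralized_policy(local_observation(positions, i))
               for i in range(N_AGENTS)]
    positions = [p + a for p, a in zip(positions, actions)]
```

Note that this toy rule only spreads the agents; the connectivity-preservation constraint from the abstract is not enforced here, which is precisely what the learned policies in the paper have to handle.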
