Abstract

In target localization applications, readings from multiple sensing agents are processed to identify a target's location. Localization systems based on stationary sensors rely on data fusion methods to estimate the target location, whereas other systems deploy mobile sensing agents (UAVs, robots) to search the area for the target. However, such methods are designed for specific environments and become infeasible when the environment changes. For instance, the presence of walls increases the environment's complexity, affecting both the collected readings and the mobility of the agents. Recent works have explored Deep Reinforcement Learning (DRL) as an efficient and adaptable approach to the target search problem, but these methods are designed either for single-agent systems or for non-complex environments. This work proposes two novel Multi-Agent DRL models for target localization through search in complex environments. The first model combines Proximal Policy Optimization (PPO), Convolutional Neural Networks, Convolutional AutoEncoders for state embeddings, and a reward function shaped using Breadth-First Search (BFS) to obtain cooperative agents that achieve fast, low-cost localization. The second model reduces the computational complexity of the first by replacing the shaped reward with a simple sparse reward, subject to the availability of expert demonstrations. These demonstrations are used in Demonstration Cloning, a novel method that leverages them to guide the learning of new agents. The proposed models are tested on a radioactive target localization scenario and benchmarked against existing methods, showing efficacy in localization time and cost as well as learning speed and stability.
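The abstract does not specify the exact form of the BFS-shaped reward, so the following is a minimal illustrative sketch only, assuming a grid world with walls and a common potential-based shaping scheme: each free cell's potential is its wall-aware BFS distance to the target, and an agent is rewarded for moves that reduce that distance. All names and the step-cost value below are hypothetical, not taken from the paper.

from collections import deque

def bfs_distances(grid, target):
    """BFS shortest-path distance from every free cell to `target`.
    `grid[r][c]` is True where a wall blocks movement."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    tr, tc = target
    dist[tr][tc] = 0
    queue = deque([target])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def shaped_reward(dist, prev_pos, new_pos, step_cost=0.01):
    """Potential-based shaping: positive when the move reduces the
    BFS distance to the target, minus a small per-step cost."""
    drop = dist[prev_pos[0]][prev_pos[1]] - dist[new_pos[0]][new_pos[1]]
    return drop - step_cost

Because BFS respects walls, this kind of shaping rewards progress along feasible paths rather than straight-line proximity, which is why it suits complex environments; the paper's second model avoids this per-step computation by substituting a sparse reward guided by expert demonstrations.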
