Abstract

In recent years, deep reinforcement learning (DRL) has shown great potential for multi-agent cooperation. However, applying DRL to the multi-target tracking (MTT) problem for unmanned aerial vehicle (UAV) swarms is challenging: 1) the number of UAVs may be large, and existing multi-agent reinforcement learning (MARL) methods that rely on global or joint information of all agents suffer from the curse of dimensionality; 2) the dimension of each UAV's received information is variable, which is incompatible with neural networks that have fixed input dimensions; 3) the UAVs are homogeneous and interchangeable, so each UAV's policy should be invariant to the permutation of its received information. To this end, we propose a DRL method for UAV swarms to solve the MTT problem. Firstly, a decentralized swarm-oriented Markov Decision Process (MDP) model is presented for UAV swarms, which involves each UAV's local communication and partial observation. Secondly, to achieve better scalability, a cartogram feature representation (FR) is proposed to integrate the variable-dimensional information set into a fixed-shape input variable; the cartogram FR also maintains permutation invariance with respect to the information. Then, the double deep Q-learning network with dueling architecture is adapted to the MTT problem, and an experience-sharing training mechanism is adopted to learn a shared cooperative policy for the UAV swarm. Extensive experiments show that our method successfully learns a cooperative tracking policy for UAV swarms and outperforms the baseline method in tracking ratio and scalability.
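The abstract does not detail the cartogram FR, but the core idea of mapping a variable-size information set into a fixed-shape, permutation-invariant input can be sketched with a simple occupancy-grid stand-in. All names and parameters below (`grid_feature`, `grid_size`, `sensing_range`) are illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

def grid_feature(rel_positions, grid_size=8, sensing_range=10.0):
    """Map a variable-length set of neighbors' relative positions into a
    fixed-shape count grid. Hypothetical stand-in for the cartogram FR:
    the output shape is constant regardless of how many neighbors are
    observed, and summation makes it invariant to their ordering."""
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    for x, y in rel_positions:
        # Normalize coordinates from [-sensing_range, sensing_range]
        # to a cell index in [0, grid_size).
        i = int((x + sensing_range) / (2 * sensing_range) * grid_size)
        j = int((y + sensing_range) / (2 * sensing_range) * grid_size)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            grid[i, j] += 1.0
    return grid
```

Because the grid is built by accumulation, feeding it to a Q-network with a fixed input layer sidesteps both the variable-dimension and permutation issues raised in the abstract.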
