Abstract

In this paper, we study the multi-pursuer single-evader pursuit-evasion (MSPE) differential game in a continuous environment with obstacles. We propose a novel pursuit-evasion algorithm based on reinforcement learning and transfer learning. In the source task learning stage, we employ Q-learning with value function approximation to overcome the large storage requirements of the conventional Q-table method. This approach extends the discrete space to the continuous space through value function approximation and effectively reduces the demand for storage. During the target task learning stage, we use a Gaussian mixture model (GMM) to classify the source tasks, and the source policies whose corresponding state-value sets have the highest probability densities are assigned to the agent in the target task. This methodology not only effectively avoids negative transfer but also improves the algorithm's generalization ability and convergence speed. Simulations and experiments demonstrate the effectiveness of the algorithm.
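For a concrete picture of the two mechanisms the abstract names, the sketch below shows (1) a Q-learning update with a linear value function approximator over a continuous state space and (2) GMM density scoring to pick the source policy whose state-value set best matches the target task. This is a minimal illustration, not the paper's implementation: the function names, the linear feature representation, the one-dimensional value samples, and all hyperparameters are assumptions.

```python
# Illustrative sketch only; details are assumed, not taken from the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

def q_update(w, phi_s_a, reward, phi_next_all, alpha=0.1, gamma=0.95):
    """One Q-learning step with a linear approximator Q(s, a) = w . phi(s, a).

    phi_s_a      -- feature vector of the current state-action pair
    phi_next_all -- feature vectors of every action available in the next state
    """
    td_target = reward + gamma * max(w @ phi for phi in phi_next_all)
    td_error = td_target - w @ phi_s_a
    return w + alpha * td_error * phi_s_a  # gradient step on the TD error

def select_source_policy(source_value_sets, target_values, n_components=3):
    """Return the index of the source task whose state-value samples give the
    target task's values the highest GMM log-likelihood density."""
    best_idx, best_density = None, -np.inf
    target = np.asarray(target_values).reshape(-1, 1)
    for idx, values in enumerate(source_value_sets):
        gmm = GaussianMixture(n_components=n_components, random_state=0)
        gmm.fit(np.asarray(values).reshape(-1, 1))
        density = gmm.score(target)  # mean log-likelihood of target values
        if density > best_density:
            best_idx, best_density = idx, density
    return best_idx  # source policy to transfer to the target task
```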
