Abstract

This paper proposes a novel guidance law for intercepting a high-speed maneuvering target based on deep reinforcement learning. The approach comprises an interceptor–target relative motion model and a value-function approximation model based on a deep Q-Network (DQN) with prioritized experience replay. First, prioritized experience replay is applied to extract more informative samples and reduce training time. Second, to cope with the discrete action space of DQN, the normal acceleration is added to the state space and its rate of change is chosen as the action; the continuous normal acceleration command is then obtained by numerical integration. Third, to make the line-of-sight (LOS) rate converge rapidly, a reward function whose absolute value tends to zero as the LOS rate vanishes is constructed. Finally, simulation experiments are conducted against high-speed maneuvering targets with different acceleration policies, comparing the proposed method with proportional navigation guidance (PNG) and a Q-Learning-based guidance law (QLG). The results demonstrate that the proposed DQN-based guidance law (DQNG) produces continuous acceleration commands, drives the LOS rate to zero rapidly, and hits maneuvering targets using only the LOS rate as measurement. They also confirm that DQNG realizes a parallel-like approach and improves interception performance against high-speed maneuvering targets. In addition, DQNG avoids the complicated formula derivation of traditional guidance laws and eliminates acceleration buffeting.
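As a rough illustration of the sampling scheme summarized above, the following minimal prioritized experience replay buffer shows proportional prioritization; the class name, the exponent alpha, and the O(n) sampling loop (rather than a sum-tree) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class PrioritizedReplay:
    """Minimal proportional prioritized-replay sketch (illustrative,
    not the paper's implementation; O(n) sampling for clarity)."""
    def __init__(self, capacity: int, alpha: float = 0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error: float):
        # Drop the oldest transition once capacity is reached.
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        # Priority grows with the TD error, so informative samples
        # are replayed more often; 1e-6 keeps probabilities nonzero.
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size: int):
        p = np.array(self.priorities)
        p /= p.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=p)
        return [self.buffer[i] for i in idx], idx
```

Similarly, a minimal sketch of the action-integration and reward-shaping ideas: the discrete action is an acceleration rate whose integral yields a continuous command, and the reward's magnitude shrinks to zero as the LOS rate converges. The action bounds, step size dt, and gain k below are assumed values, not taken from the paper.

```python
import numpy as np

# Discrete action set: candidate normal-acceleration rates (assumed bounds).
ACTION_RATES = np.linspace(-50.0, 50.0, 11)
dt = 0.01  # assumed guidance update step (s)

def next_acceleration(a_n: float, action_idx: int) -> float:
    """Euler integration of the selected acceleration rate, giving a
    continuous acceleration command from a discrete action."""
    return a_n + ACTION_RATES[action_idx] * dt

def reward(los_rate: float, k: float = 10.0) -> float:
    """Negative reward whose absolute value tends to zero as the
    line-of-sight (LOS) rate converges to zero."""
    return -k * abs(los_rate)
```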
