Abstract

As the complexity of the modern battlefield increases, the role and capability of missiles have become ever more important. Missile guidance systems therefore need to become more intelligent and autonomous to cope with complicated environments. In this paper, we propose novel missile guidance laws based on reinforcement learning that can autonomously avoid obstacles and terrain in complicated environments with limited prior information, without requiring offline trajectory or waypoint generation. The proposed guidance laws address two mission scenarios: the first involves planar obstacles, representative of maritime operations, and the second involves complex terrain, representative of land operations. We present the detailed design process for both scenarios, including the neural network architecture, reward function selection, and training method. Simulation results demonstrate the feasibility and effectiveness of the proposed guidance laws, and their advantages and limitations are discussed.
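
As a rough illustration of the kind of reinforcement-learning guidance setup summarized above, the sketch below defines a toy two-dimensional engagement with a single circular obstacle and trains a linear Gaussian policy with a plain REINFORCE update. This is not the authors' method (the paper uses neural network policies with a dedicated reward design and training procedure); the environment geometry, reward weights, policy form, and all hyperparameters here are assumptions chosen only to keep the example self-contained and runnable.

# Minimal illustrative sketch, not the paper's implementation: a 2-D planar
# engagement with one circular obstacle, a reward combining progress toward
# the target with a collision penalty, and a REINFORCE update on a linear
# Gaussian policy. All names, gains, and hyperparameters are assumptions.
import numpy as np

class PlanarGuidanceEnv:
    def __init__(self):
        self.dt = 0.1                          # integration step [s]
        self.speed = 1.0                       # constant missile speed
        self.target = np.array([10.0, 0.0])    # target position
        self.obstacle = np.array([5.0, 0.5])   # obstacle center
        self.obstacle_radius = 1.0
        self.reset()

    def reset(self):
        self.pos = np.array([0.0, 0.0])
        self.heading = 0.0                     # flight-path angle [rad]
        return self._obs()

    def _obs(self):
        # Observation: relative target and obstacle positions plus heading.
        return np.concatenate([self.target - self.pos,
                               self.obstacle - self.pos,
                               [self.heading]])

    def step(self, turn_rate):
        # Action: commanded turn rate (proxy for lateral acceleration command).
        self.heading += float(np.clip(turn_rate, -1.0, 1.0)) * self.dt
        self.pos += self.speed * self.dt * np.array([np.cos(self.heading),
                                                     np.sin(self.heading)])
        dist_target = np.linalg.norm(self.target - self.pos)
        dist_obstacle = np.linalg.norm(self.obstacle - self.pos)
        # Reward: negative range to target, large penalty on collision.
        reward = -dist_target
        done = dist_target < 0.2
        if dist_obstacle < self.obstacle_radius:
            reward -= 100.0
            done = True
        return self._obs(), reward, done

def train(episodes=500, lr=1e-3, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    env = PlanarGuidanceEnv()
    w = np.zeros(5)                            # linear Gaussian policy mean weights
    for _ in range(episodes):
        obs, grads, rewards, done = env.reset(), [], [], False
        for _ in range(300):
            mean = float(w @ obs)
            action = mean + rng.normal(scale=sigma)
            grads.append((action - mean) / sigma**2 * obs)   # d log pi / d w
            obs, r, done = env.step(action)
            rewards.append(r)
            if done:
                break
        ret = sum(rewards)
        for g in grads:                        # REINFORCE update on episode return
            w += lr * ret * g
    return w

if __name__ == "__main__":
    print("trained policy weights:", train())

In a deep-RL version, the linear policy above would be replaced by a neural network and the single obstacle by the planar-obstacle or terrain scenarios described in the paper; the environment interface (observation, action, reward, termination) would stay structurally the same.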
