Abstract

A new approach to missile guidance law design is proposed, in which reinforcement learning (RL) is used to learn a homing-phase guidance law that is optimal with respect to the missile's airframe dynamics as well as sensor and actuator noise and delays. It is demonstrated that this approach yields a guidance law with performance superior to both proportional navigation (PN) guidance and enhanced PN guidance laws developed using Lyapunov theory. Although optimal control theory can be used to derive an optimal control law under certain idealized modeling assumptions, we discuss how the RL approach offers greater flexibility and higher expected performance for real-world systems.
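For context, the classical PN baseline referenced above commands a lateral acceleration proportional to the closing velocity and the line-of-sight rate, a_c = N · V_c · λ̇. The sketch below is a minimal planar illustration of that baseline, not the paper's method; the navigation constant value and the vector conventions are illustrative assumptions.

```python
import numpy as np

def pn_acceleration(r_rel, v_rel, N=3.0):
    """Proportional-navigation commanded acceleration for a planar engagement.

    r_rel : 2-D relative position of the target with respect to the missile.
    v_rel : 2-D relative velocity of the target with respect to the missile.
    N     : navigation constant (typically 3-5; 3.0 here is an assumed value).
    """
    # Line-of-sight rate (scalar in the plane): lambda_dot = (r x v) / |r|^2
    los_rate = (r_rel[0] * v_rel[1] - r_rel[1] * v_rel[0]) / np.dot(r_rel, r_rel)
    # Closing velocity: negative range rate
    v_closing = -np.dot(r_rel, v_rel) / np.linalg.norm(r_rel)
    # Commanded lateral acceleration: a_c = N * Vc * lambda_dot
    return N * v_closing * los_rate
```

An RL-based guidance law, by contrast, would map the same (noisy, delayed) engagement observations to acceleration commands through a learned policy, which is what allows it to account for airframe dynamics and sensor/actuator imperfections that the idealized PN derivation ignores.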
