Abstract

This paper investigates the impact-time-control guidance problem under time-varying velocity caused by gravity and aerodynamic drag. Using a deep reinforcement learning (DRL) algorithm, we propose a novel impact time control guidance (ITCG) law in which a DRL agent is trained from scratch without any prior knowledge. Unlike traditional ITCG laws, the proposed method does not rely on a time-to-go estimate, which is difficult to derive and inaccurate when the velocity varies over time. Furthermore, a prioritized experience replay method and a novel action exploration method are introduced into the DRL algorithm to improve learning efficiency. In addition, the agent's action is shaped to produce a smooth guidance command, avoiding the discontinuities that commands generated by intelligent algorithms may otherwise exhibit. Numerical simulations are conducted to validate the proposed algorithm.
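As an illustration of the action-shaping idea mentioned above, the following is a minimal sketch (not the paper's actual formulation): the agent's raw output is scaled to a bounded lateral acceleration and passed through an assumed first-order low-pass filter so that consecutive guidance commands vary continuously. The function name, bounds, and filter constant are hypothetical.

```python
import numpy as np

def shaped_command(raw_action, prev_command, a_max=10.0, alpha=0.1):
    """Map the agent's raw output in [-1, 1] to a bounded acceleration
    command and low-pass filter it so successive commands change
    smoothly rather than jumping discontinuously (illustrative only)."""
    target = np.clip(raw_action, -1.0, 1.0) * a_max        # scale to physical limit
    return (1.0 - alpha) * prev_command + alpha * target   # first-order smoothing

# Example: successive raw agent actions yield a continuous command history.
command = 0.0
for raw in [0.9, -0.8, 0.85, 0.1]:
    command = shaped_command(raw, command)
    print(f"guidance command: {command:+.3f} m/s^2")
```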
