Abstract
Model‐free deep reinforcement learning (DRL) is regarded as an effective approach for multi‐target cognitive electronic reconnaissance (MCER) missions. However, DRL networks with poor generalisation can significantly reduce mission completion rates when parameters such as reconnaissance area size, target number, and platform speed vary slightly. To address this issue, this paper introduces a novel scene reconstruction method for MCER missions and a mission group adaptive transfer deep reinforcement learning (MTDRL) algorithm. The algorithm enables quick adaptation of reconnaissance strategies to varied mission scenes by transferring strategy templates and compressing multi‐target perception states. To validate the method, the authors developed a transfer learning model for unmanned aerial vehicle (UAV) MCER. Three sets of experiments are conducted by varying the reconnaissance area size, the target number, and the platform speed. The results show that the MTDRL algorithm outperforms two commonly used DRL algorithms, with an 18% increase in mission completion rate and a 5.49 h reduction in training time. Furthermore, the mission completion rate of the MTDRL algorithm is much higher than that of a typical non‐DRL algorithm. The UAV demonstrates stable hovering and repeat reconnaissance behaviours at the radar detection boundary, ensuring flight safety during missions.