Abstract
Unmanned surface vessel (USV) operations will profoundly change the form of future maritime warfare, and the cluster intelligence of USVs is one of the critical factors for victory. Training USVs for combat with reinforcement learning (RL) is therefore an important research direction. Sparse rewards, one of the hard problems in RL, make USV training slow and inefficient. To address the sparse-reward problem, a modified random network distillation (MRND) algorithm is proposed. The algorithm weights the intrinsic reward by the variance of the number of training steps across training episodes, dynamically balancing intrinsic and extrinsic rewards. Combined with the classical proximal policy optimization (PPO) algorithm and an iterative self-play training scheme, MRND can rapidly improve USV cluster intelligence. USV cluster combat training environments are constructed on the Unity3D and ML-Agents Toolkit platform, and three types of USV cluster combat simulations are conducted to validate the algorithm: a target-pursuit combat simulation, a USV cluster maritime combat simulation, and a USV cluster base offense-and-defense combat simulation. The simulation experiments show that USV clusters trained with the MRND algorithm converge faster, acquire more reward in fewer steps, and exhibit a higher level of intelligence than clusters trained with the comparison algorithms.
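To make the described mechanism concrete, the following is a minimal sketch of an RND-style intrinsic bonus whose coefficient is modulated by the variance of recent episode step counts, in the spirit of the MRND idea summarized above. It is not the paper's implementation; the class names (RNDBonus, VarianceWeight), the window size, and the exact squashing of the variance into a weight are illustrative assumptions.

```python
# Sketch only: RND intrinsic bonus + variance-of-episode-length weighting.
# Assumed names/hyperparameters (window, beta_max, network sizes) are hypothetical.
import collections
import numpy as np
import torch
import torch.nn as nn


class RNDBonus(nn.Module):
    """Random network distillation: the intrinsic reward is the predictor's
    error against a fixed, randomly initialized target network."""

    def __init__(self, obs_dim: int, feat_dim: int = 64):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                    nn.Linear(128, feat_dim))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                       nn.Linear(128, feat_dim))
        for p in self.target.parameters():  # target network stays fixed
            p.requires_grad_(False)
        self.opt = torch.optim.Adam(self.predictor.parameters(), lr=1e-4)

    def bonus_and_update(self, obs: torch.Tensor) -> torch.Tensor:
        """Return per-state intrinsic reward and take one predictor step."""
        with torch.no_grad():
            tgt = self.target(obs)
        pred = self.predictor(obs)
        err = (pred - tgt).pow(2).mean(dim=-1)  # novelty: high in unvisited states
        loss = err.mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return err.detach()


class VarianceWeight:
    """Intrinsic-reward coefficient driven by the variance of the step counts
    of the last `window` episodes. Assumed reading of the abstract: unstable
    episode lengths (high variance) keep exploration weighted up, while stable
    lengths shift weight toward the extrinsic reward."""

    def __init__(self, window: int = 20, beta_max: float = 1.0):
        self.lengths = collections.deque(maxlen=window)
        self.beta_max = beta_max

    def update(self, episode_steps: int) -> None:
        self.lengths.append(episode_steps)

    def beta(self) -> float:
        if len(self.lengths) < 2:
            return self.beta_max
        var = float(np.var(self.lengths))
        mean = float(np.mean(self.lengths)) or 1.0
        # Normalize the variance by the mean length and squash into (0, beta_max);
        # one plausible choice, not the paper's formula.
        return self.beta_max * (1.0 - np.exp(-var / mean))


# The reward handed to PPO for each transition would then be:
#   r_total = r_extrinsic + weight.beta() * rnd.bonus_and_update(obs)
```

In this sketch the weighted sum of extrinsic and intrinsic rewards replaces the environment reward in an otherwise standard PPO update, so the scheduler only changes the reward signal, not the policy-optimization machinery.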