Abstract

Reinforcement learning has been applied to air combat problems in recent years, often in combination with curriculum learning; however, traditional curriculum learning suffers from plasticity loss in neural networks, i.e., the difficulty of learning new knowledge once the network has converged. To address this, we propose a motivational curriculum learning distributed proximal policy optimization (MCLDPPO) algorithm, with which trained agents significantly outperform a predictive game tree and mainstream reinforcement learning methods. Motivational curriculum learning helps the agent gradually improve its combat ability by observing where its performance is unsatisfactory and providing appropriate rewards as guidance. Furthermore, complete tactical maneuvers are encapsulated based on existing air combat knowledge, and through the flexible use of these maneuvers, tactics beyond human knowledge can be realized. In addition, we design an interruption mechanism that increases the agent's decision frequency in emergencies: whenever the number of threats to the agent changes, the current action is interrupted so that observations are reacquired and a new decision is made. This interruption mechanism significantly improves the agent's performance. To better approximate actual air combat, we use digital twin technology to simulate real air battles and propose a parallel battlefield mechanism that runs multiple simulation environments simultaneously, effectively improving data throughput. Experimental results demonstrate that the agent fully utilizes situational information to make reasonable decisions and adapt its tactics in air combat, verifying the effectiveness of the proposed algorithmic framework.

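The interruption mechanism described above can be read as a macro-action loop that aborts the current tactical maneuver whenever the observed threat count changes. The following is a minimal sketch of that idea in Python under stated assumptions: `env` is a Gym-style simulation, `select_maneuver` maps an observation to a finite sequence of low-level actions (an encapsulated maneuver), and `count_threats` extracts the number of threats from the observation. These names are hypothetical placeholders for illustration, not the paper's actual interfaces.

```python
def run_episode(env, select_maneuver, count_threats, max_steps=1000):
    """Execute tactical maneuvers, interrupting the current maneuver
    whenever the number of threats observed by the agent changes."""
    obs = env.reset()
    step = 0
    while step < max_steps:
        threats_at_decision = count_threats(obs)
        maneuver = select_maneuver(obs)      # commit to one tactical maneuver
        for action in maneuver:              # execute it action by action
            obs, reward, done, info = env.step(action)
            step += 1
            if done or step >= max_steps:
                return
            # Interruption: if the threat count changed, abandon the rest of
            # the maneuver and make a new decision on a fresh observation.
            if count_threats(obs) != threats_at_decision:
                break
```

The same loop structure also suggests how the parallel battlefield mechanism could raise data throughput: several such episode loops, each with its own `env` instance, can run concurrently and feed a shared experience buffer for the distributed PPO learner.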