Abstract

Close-range autonomous air combat has gained significant attention from researchers working on applications of artificial intelligence (AI). Most previous studies on autonomous air combat focused on one-on-one engagements; however, modern air combat is mostly conducted in formations. To address this gap, a novel hierarchical maneuvering control architecture is introduced for multi-aircraft close-range air combat that can handle scenarios with formations of variable size. Three air combat sub-tasks are designed, and a recurrent soft actor-critic (RSAC) algorithm combined with competitive self-play (SP) is used to learn the corresponding sub-strategies. A novel hierarchical multi-agent reinforcement learning (HMARL) algorithm is then proposed to obtain the high-level strategy for target and sub-strategy selection. The training performance of the sub-strategy and high-level strategy learning algorithms is evaluated in different air combat scenarios. Analysis of the resulting strategies shows that the formations exhibit effective cooperative behavior in both symmetric and asymmetric scenarios. Finally, ideas for the engineering implementation of the maneuvering control architecture are given. The study provides a solution for future multi-aircraft autonomous air combat.
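To make the hierarchical decision flow concrete, the following is a minimal Python sketch of one decision cycle: a high-level policy selects a target and one of three sub-strategies, and the chosen sub-strategy produces a maneuver command. The class names, the sub-task labels, and the observation fields are illustrative assumptions for this sketch and are not taken from the paper; in the paper, the sub-strategies are learned with RSAC plus self-play and the high-level selector with the proposed HMARL algorithm.

```python
# Illustrative sketch of the hierarchical maneuvering control flow; all names
# below are assumptions, not the authors' implementation.
import random
from dataclasses import dataclass
from typing import Dict, List, Tuple

SUB_STRATEGIES = ["attack", "defend", "support"]  # hypothetical labels for the three sub-tasks


@dataclass
class Observation:
    own_state: List[float]          # e.g. position, velocity, attitude of the controlled aircraft
    ally_states: List[List[float]]
    enemy_states: List[List[float]]


class SubStrategy:
    """Low-level maneuver policy; in the paper these are learned with RSAC + self-play."""

    def __init__(self, name: str):
        self.name = name

    def act(self, obs: Observation) -> List[float]:
        # Placeholder maneuver command (e.g. [roll, pitch, throttle]); a trained
        # recurrent policy network would compute this from the observation history.
        return [0.0, 0.0, 0.5]


class HighLevelPolicy:
    """High-level strategy selecting a target and a sub-strategy per aircraft;
    in the paper this is obtained with the proposed HMARL algorithm."""

    def select(self, obs: Observation) -> Tuple[int, str]:
        target_idx = random.randrange(len(obs.enemy_states))  # stand-in for the learned selector
        sub_name = random.choice(SUB_STRATEGIES)
        return target_idx, sub_name


def decision_cycle(obs: Observation, high: HighLevelPolicy,
                   subs: Dict[str, SubStrategy]) -> List[float]:
    """One decision cycle: high-level target/sub-strategy selection, then a maneuver command."""
    _target_idx, sub_name = high.select(obs)
    return subs[sub_name].act(obs)


if __name__ == "__main__":
    obs = Observation(own_state=[0.0] * 6,
                      ally_states=[[0.0] * 6],
                      enemy_states=[[1.0] * 6, [2.0] * 6])
    subs = {name: SubStrategy(name) for name in SUB_STRATEGIES}
    print(decision_cycle(obs, HighLevelPolicy(), subs))
```

Because each aircraft runs the same two-level loop, the formation size only changes the lengths of the ally and enemy state lists, which is what allows the architecture to handle variable-size formations.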
