Abstract

Faced with the challenges that complex attack behavior and dynamic network structure pose for security strategy design, dynamic hierarchical intelligent defense methods have proven effective. However, applying them in complex network environments requires stronger coordination mechanisms. Therefore, this paper proposes a hierarchical multi-agent reinforcement learning method for network attack-defense gaming and cooperative defense decision-making, which autonomously and efficiently formulates defense strategies and defense behavior responses. Firstly, we construct a Stackelberg hypergame model of cyberspace conflicts and characterize the multi-layer dynamic defense coordination response mechanism under conditions of information loss. Secondly, by using hierarchical multi-agent reinforcement learning as the driving force for game evolution, we sequentially solve for the Nash equilibrium of the game and form a dynamic autonomous defense strategy. Finally, we construct a hierarchical multi-agent reinforcement learning framework that decouples the defense decision problem, reduces the dimension of the defense action space and the difficulty of exploring the strategy space, and learns coordinated defense strategies more efficiently. We conduct simulations in the CybORG (Cyber Operations Research Gym) environment and compare the autonomously generated cyber defense strategies with related works, verifying the superior coordination performance of our method in defense strategy generation and control.
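
To make the hierarchical decomposition concrete, the sketch below illustrates one way a two-level defender could decouple the decision problem as described above: a high-level policy selects a defensive sub-task, and a sub-task-specific low-level policy selects a concrete action within a reduced action space. This is a minimal illustrative sketch, not the authors' implementation or the CybORG API; all names (SUBTASKS, TabularPolicy, defender_step, env_step) are hypothetical.

```python
import numpy as np

# Hypothetical partition of the full defender action space into sub-task
# spaces, so each hierarchy level only explores a much smaller action set.
SUBTASKS = {
    "monitor": ["analyse_host_0", "analyse_host_1", "analyse_host_2"],
    "restore": ["restore_host_0", "restore_host_1", "restore_host_2"],
    "deceive": ["decoy_host_0", "decoy_host_1", "decoy_host_2"],
}


class TabularPolicy:
    """Minimal epsilon-greedy Q-learning policy used at both hierarchy levels."""

    def __init__(self, n_actions, eps=0.1, lr=0.1, gamma=0.95):
        self.q = {}
        self.n_actions, self.eps, self.lr, self.gamma = n_actions, eps, lr, gamma

    def act(self, state):
        # Explore with probability eps, otherwise pick the greedy action.
        if np.random.rand() < self.eps:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.q.setdefault(state, np.zeros(self.n_actions))))

    def update(self, s, a, r, s_next):
        # One-step Q-learning update.
        q_s = self.q.setdefault(s, np.zeros(self.n_actions))
        q_next = self.q.setdefault(s_next, np.zeros(self.n_actions))
        q_s[a] += self.lr * (r + self.gamma * q_next.max() - q_s[a])


# High-level policy chooses a sub-task; one low-level policy per sub-task
# chooses a concrete defense action, so no level searches the full joint space.
high_level = TabularPolicy(n_actions=len(SUBTASKS))
low_level = {name: TabularPolicy(n_actions=len(actions))
             for name, actions in SUBTASKS.items()}


def defender_step(state, env_step):
    """One hierarchical decision: pick a sub-task, then a concrete action.

    `env_step` is an assumed callable mapping an action name to
    (next_state, reward), standing in for the attack-defense environment.
    """
    subtask_idx = high_level.act(state)
    subtask = list(SUBTASKS)[subtask_idx]
    action_idx = low_level[subtask].act(state)
    action = SUBTASKS[subtask][action_idx]
    next_state, reward = env_step(action)
    high_level.update(state, subtask_idx, reward, next_state)
    low_level[subtask].update(state, action_idx, reward, next_state)
    return next_state, reward
```

Under this decomposition, each policy's action space is the size of one sub-task rather than their product, which is the sense in which the hierarchy reduces the dimension of the defense action space and the difficulty of exploring the strategy space.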
