Abstract

Cooperative multiagent reinforcement learning (c-MARL) approaches are increasingly being used to make decisions in contested and dynamic environments, which can differ substantially from the environments used to train them. As such, there is a need for a more in-depth understanding of their resilience and robustness under conditions such as network partitions, node failures, or attacks. In this article, we propose a modeling and simulation framework that explores the resilience of four c-MARL models when faced with different types of attacks, and the impact that training with different perturbations has on the effectiveness of these attacks. We show that c-MARL approaches are highly vulnerable to perturbations of observations, actions, rewards, and communication, with performance dropping by more than 80% from the baseline. We also show that appropriate training with perturbations can dramatically improve performance in some cases; however, it can also result in overfitting, making the models less resilient against other attacks. This is a first step toward a more in-depth understanding of the resilience of c-MARL models, the effect that contested environments can have on their behavior, and the resilience of complex systems in general.
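
To make the perturbation types concrete, below is a minimal sketch of one of them: an observation perturbation applied through an environment wrapper that injects bounded noise into each agent's observations. The environment interface (per-agent observation dicts returned by reset/step), the class name, and the epsilon parameter are illustrative assumptions for this sketch, not the paper's actual framework.

```python
import numpy as np

class ObservationPerturbationWrapper:
    """Minimal sketch of an observation-perturbation attack on a
    cooperative multi-agent environment. Assumes a PettingZoo-style
    interface where reset()/step() return per-agent dicts; this is
    an assumption, not the API used in the paper."""

    def __init__(self, env, epsilon=0.1, rng=None):
        self.env = env
        self.epsilon = epsilon  # L-infinity perturbation budget (assumed)
        self.rng = rng if rng is not None else np.random.default_rng()

    def _perturb(self, obs):
        # Add uniform noise in [-epsilon, epsilon] to each agent's observation.
        return {
            agent: o + self.rng.uniform(-self.epsilon, self.epsilon, size=np.shape(o))
            for agent, o in obs.items()
        }

    def reset(self):
        return self._perturb(self.env.reset())

    def step(self, actions):
        obs, rewards, dones, infos = self.env.step(actions)
        return self._perturb(obs), rewards, dones, infos
```

Analogous wrappers could perturb actions, rewards, or inter-agent messages; training against such wrappers corresponds to the perturbation-based training discussed above.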
