Abstract

Multi-agent deep deterministic policy gradient (MADDPG) is a classic deep reinforcement learning algorithm for multi-agent systems, but it suffers from critical problems such as poor training stability and low policy robustness, which significantly limit its capability and range of application. This article therefore proposes an improved algorithm, friend-or-foe multi-agent deep deterministic policy gradient, to address these problems. The main innovations are as follows: (1) inspired by friend-or-foe game theory, we modify the framework of the original MADDPG to use two identical training networks that take the agents' optimal and worst actions as input, which improves the robustness of the trained policies; and (2) we propose a gradient-descent-based action perturbation technique that expands the range of candidate actions, thereby improving the training stability of the proposed algorithm. Finally, we conducted multiple sets of comparative experiments between our friend-or-foe MADDPG and the original algorithm in four authoritative mixed cooperative–competitive scenarios. The results show that the improved algorithm simultaneously improves training stability and the robustness of the agents' policies across different complex environments.
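For illustration, the sketch below shows one common way a gradient-based action perturbation of the kind described in innovation (2) can be realized: the critic's gradient with respect to the action is used to nudge an agent's action toward an approximately optimal ("friend") or worst-case ("foe") variant. This is a minimal PyTorch sketch under assumed interfaces; the `critic` signature, step size, and action bounds are hypothetical and not taken from the paper.

```python
import torch


def perturb_action(critic, obs, action, step_size=0.01, minimize=False):
    """Gradient-based action perturbation (hypothetical sketch).

    Moves `action` along the gradient of the critic's Q-value:
    gradient ascent yields an approximately optimal ("friend") action,
    gradient descent an approximately worst-case ("foe") action.
    """
    # Detach so the perturbation does not flow into the training graph.
    action = action.clone().detach().requires_grad_(True)

    # Scalar Q-value; sum() handles batched inputs.
    q_value = critic(obs, action).sum()
    q_value.backward()

    direction = -1.0 if minimize else 1.0
    with torch.no_grad():
        perturbed = action + direction * step_size * action.grad

    # Keep the perturbed action inside an assumed action range of [-1, 1].
    return perturbed.clamp(-1.0, 1.0).detach()
```

Under this reading, the two identical training networks from innovation (1) would be fed the outputs of `perturb_action(..., minimize=False)` and `perturb_action(..., minimize=True)` respectively, so that the learned policies remain robust against worst-case behavior by other agents.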
