Abstract

How cooperation evolves is one of the fundamental research problems in multi-agent systems. With a deeper understanding of the forces that promote cooperation, we could increase the proportion of cooperative agents. However, existing methods for cultivating cooperation share two common limitations. First, most do not take the privacy of agents into account. Privacy preservation is essential in multi-agent systems because, without it, an adversarial agent can exploit the private information of other agents to maximize its own payoff. Beyond the obvious security implications, this is also detrimental because maximizing one's own payoff typically implies minimizing the payoffs of others, which tends to reduce the number of agents willing to cooperate. Second, most existing methods generalize poorly: their performance is usually highly dependent on specific circumstances, e.g., the system topology or the initial proportion of cooperative agents. To overcome these two drawbacks, we propose a novel method that combines differential privacy, which protects each agent's private information from adversaries, with a neural network architecture that optimizes agents' decision making across a wider range of situations. Through this joint design, each agent's privacy is guaranteed, yet agents are still encouraged to cooperate even when adversaries are present. One notable application is federated learning (FL). In FL, our method can incentivize clients to cooperate actively with the central server by contributing high-quality model updates, while rigorously protecting their data privacy.
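
The abstract does not specify which differential-privacy mechanism the method uses, so the sketch below is only a generic illustration of the underlying idea: before an agent (e.g., an FL client) shares its model update, it clips the update to bound its sensitivity and adds calibrated Gaussian noise. The function name gaussian_mechanism and all parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

def gaussian_mechanism(update, clip_norm=1.0, epsilon=0.5, delta=1e-5, rng=None):
    """Release a model update under (epsilon, delta)-differential privacy.

    Hypothetical illustration: clip the update so its L2 sensitivity is
    bounded by clip_norm, then add Gaussian noise with scale
    sigma >= clip_norm * sqrt(2 ln(1.25/delta)) / epsilon
    (the standard Gaussian-mechanism bound, valid for epsilon < 1).
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Clip to bound the L2 sensitivity of the released update.
    norm = max(np.linalg.norm(update), 1e-12)
    clipped = update * min(1.0, clip_norm / norm)
    # Noise scale calibrated to the privacy budget (epsilon, delta).
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# Example: an FL client perturbs its update before sending it to the server.
raw_update = np.array([0.8, -1.3, 0.4])
private_update = gaussian_mechanism(raw_update)
```

Because the noise is independent of the underlying data, an adversary observing the released update cannot confidently infer any single agent's private information, which is the kind of protection the abstract relies on.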
