Abstract

This paper studies the problem of optimization in multi-agent systems where each agent seeks to minimize the sum of all agents' objective functions without knowing the others' functions. To preserve privacy, each agent must keep its objective function hidden from other agents and potential attackers. We design a completely distributed algorithm that achieves differential privacy by perturbing states and adjusting descent directions with decaying Laplace noise. The proposed algorithm ensures that an attacker who intercepts the messages cannot obtain the objective function of any agent, even if it bribes all other agents. A constant stepsize is adopted to improve the convergence rate. It is shown that the algorithm converges almost surely and that the convergence point is independent of the noise added to the states. The trade-off between differential privacy and convergence accuracy is also characterized. Finally, simulations are conducted to validate the effectiveness of the proposed algorithm.
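A minimal sketch of the kind of mechanism the abstract describes: agents run consensus-plus-gradient updates with a constant stepsize, broadcasting states perturbed by Laplace noise whose scale decays geometrically. Everything concrete here is an assumption for illustration (quadratic local objectives, a ring network with doubly stochastic mixing weights, and the specific stepsize and noise parameters), not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical private local objectives: f_i(x) = 0.5 * (x - c_i)^2.
# The global problem is to minimize sum_i f_i, whose minimizer is mean(c).
targets = np.array([1.0, 3.0, -2.0, 4.0])
n = len(targets)

# Assumed ring communication graph with doubly stochastic mixing weights.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.05  # constant stepsize (as in the abstract; value is illustrative)
s0, q = 1.0, 0.95  # initial Laplace noise scale and its decay rate (assumed)

x = np.zeros(n)  # each agent's local state
for k in range(2000):
    scale = s0 * q**k                        # decaying noise scale
    noise = rng.laplace(scale=scale, size=n) # privacy-preserving perturbation
    y = x + noise                            # agents broadcast perturbed states
    grad = x - targets                       # gradient of each private f_i at x_i
    x = W @ y - alpha * grad                 # mix perturbed states, then descend

# States cluster near mean(targets) = 1.5, up to an O(alpha) consensus error.
print(np.round(x, 2))
```

The constant stepsize trades exact consensus for speed: the states settle in an O(alpha)-neighborhood of the global minimizer, while the decaying noise vanishes and therefore does not shift the convergence point, consistent with the claim in the abstract.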
