Abstract

This article studies distributed optimization in multiagent systems, where each agent seeks to minimize the sum of all agents’ objective functions using only local information. For security, each agent must keep its objective function private from the other agents and from potential eavesdroppers. We first prove that convergence and differential privacy cannot be guaranteed simultaneously by perturbing states in exact distributed optimization algorithms. Motivated by this result, we design a completely distributed algorithm, Distributed algorithm via Direction and State Perturbation (DiaDSP), that achieves differential privacy by perturbing both states and directions with decaying Laplace noise. Unlike most existing works, which require decaying stepsizes to ensure convergence, we show that DiaDSP converges in mean and almost surely even with a constant stepsize. In particular, we prove linear convergence in mean assuming only that the sum of all cost functions is strongly convex; R-linear convergence is proved under Lipschitz gradients rather than bounded gradients. We also establish the optimal stepsize that yields the fastest convergence rate. Moreover, we describe the algorithm’s privacy properties and characterize the tradeoff between differential privacy and convergence accuracy. Simulations on a typical sensor fusion problem validate the theoretical results.
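
To make the mechanism concrete, the sketch below illustrates the general idea in Python: a gradient-tracking-style distributed iteration in which each agent only ever shares Laplace-perturbed copies of its state and its direction, with the noise scale decaying geometrically while the stepsize stays constant. The update rule, the noise schedule c·q^k, and every name and parameter here (dia_dsp_sketch, alpha, c, q) are illustrative assumptions; the abstract does not specify the exact DiaDSP iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dia_dsp_sketch(grads, W, x0, alpha=0.05, c=1.0, q=0.9, iters=200):
    """Illustrative noisy distributed iteration (an assumed
    gradient-tracking form, not the paper's exact DiaDSP update).

    grads : list of per-agent gradient callables grad_i(x)
    W     : (n, n) doubly stochastic mixing matrix
    x0    : (n, d) initial states, one row per agent
    alpha : constant stepsize
    c, q  : Laplace noise scale c * q**k, decaying geometrically
    """
    n, d = x0.shape
    x = x0.copy()
    # Directions initialized to the local gradients (an assumed
    # construction for the "directions" the abstract mentions).
    y = np.array([grads[i](x[i]) for i in range(n)])
    g_old = y.copy()
    for k in range(iters):
        scale = c * q ** k
        # Agents broadcast Laplace-perturbed states and directions;
        # neighbors only ever see the noisy versions.
        x_shared = x + rng.laplace(0.0, scale, size=(n, d))
        y_shared = y + rng.laplace(0.0, scale, size=(n, d))
        x = W @ x_shared - alpha * y              # consensus + descent
        g_new = np.array([grads[i](x[i]) for i in range(n)])
        y = W @ y_shared + g_new - g_old          # track average gradient
        g_old = g_new
    return x
```

A toy instance with quadratic costs f_i(x) = ½‖x − b_i‖², whose sum is minimized at the mean of the b_i (the sensor-fusion flavor mentioned in the abstract):

```python
n, d = 5, 2
b = rng.normal(size=(n, d))
grads = [lambda x, bi=bi: x - bi for bi in b]
W = np.full((n, n), 1.0 / n)  # complete graph, doubly stochastic
x_final = dia_dsp_sketch(grads, W, np.zeros((n, d)))
print(x_final.mean(axis=0), "vs", b.mean(axis=0))  # should be close
```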
