Abstract

As decentralized optimization finds growing application in domains ranging from machine learning and control to robotics, its privacy is receiving increasing attention. Existing privacy solutions for decentralized optimization patch on information-technology privacy mechanisms such as differential privacy or homomorphic encryption, which either sacrifice optimization accuracy or incur heavy computation/communication overhead. We propose an inherently privacy-preserving decentralized optimization algorithm that exploits the robustness of decentralized optimization dynamics. More specifically, we present a general decentralized optimization framework and show that the privacy of participating nodes’ gradients can be protected by adding randomness to the optimization parameters. We further show that the added randomness has no influence on the accuracy of optimization, and we prove that the resulting algorithm converges R-linearly when the global objective function is smooth and strongly convex. We also prove that the proposed algorithm prevents a node’s gradient from being inferred by other nodes. Simulation results confirm the theoretical predictions.
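To make the setting concrete, below is a minimal sketch, not the paper’s algorithm, of a decentralized gradient-tracking loop over a ring network in which the doubly stochastic mixing weights are redrawn at random every iteration. The topology, the quadratic objectives, the weight range, and all names are illustrative assumptions; the per-iteration random mixing matrix merely stands in for the abstract’s idea of injecting randomness into optimization parameters without biasing the fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (assumed, for illustration): 4 nodes on a ring, each holding
# a private quadratic f_i(x) = 0.5 * a[i] * (x - b[i])**2,
# so grad f_i(x) = a[i] * (x - b[i]).
a = np.array([1.0, 2.0, 0.5, 1.5])
b = np.array([4.0, -1.0, 2.0, 0.0])
x_star = np.sum(a * b) / np.sum(a)        # minimizer of the global sum
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # ring topology
n = 4

def grad(x):
    return a * (x - b)

def random_mixing_matrix():
    """Symmetric doubly stochastic W = I - sum_e eps_e (e_i - e_j)(e_i - e_j)^T,
    with a fresh random weight per edge each iteration (a hypothetical
    stand-in for the randomized optimization parameters in the abstract)."""
    W = np.eye(n)
    for (i, j) in edges:
        eps = rng.uniform(0.1, 0.2)  # degree 2 => diagonal stays >= 0.6
        W[i, i] -= eps
        W[j, j] -= eps
        W[i, j] += eps
        W[j, i] += eps
    return W

# DIGing-style gradient tracking: x mixes states and steps along y,
# while y tracks the network-average gradient. With time-varying
# doubly stochastic W, the iterates still converge to the exact optimum.
alpha = 0.05
x = np.zeros(n)
g_old = grad(x)
y = g_old.copy()
for k in range(400):
    W = random_mixing_matrix()
    x = W @ x - alpha * y
    g_new = grad(x)
    y = W @ y + g_new - g_old
    g_old = g_new

print("consensus estimates:", x)
print("global minimizer   :", x_star)  # all entries approach x_star
```

In this generic sketch, the randomness resides in the coupling weights rather than in the iterates themselves, so every run converges to the same global minimizer, which is consistent with the abstract’s claim that the added randomness does not affect optimization accuracy.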
