Abstract

This paper investigates a constrained distributed optimization problem under differential privacy, where the underlying network is time-varying with unbalanced digraphs. To solve this problem, we first propose a differentially private online distributed algorithm that injects adaptively adjusted Laplace noise. The proposed algorithm not only protects the privacy of participants without relying on a trusted third party, but can also be implemented on more general time-varying unbalanced digraphs. Under mild conditions, we then show that the proposed algorithm achieves a sublinear bound on the expected regret for general convex local objective functions. The result reveals a trade-off between optimization accuracy and the level of privacy. Finally, numerical simulations are conducted to validate the effectiveness of the proposed algorithm.
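The noise-injection mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the decaying schedule `b_t = c * q**t` and the function name `dp_share` are assumptions chosen to illustrate how an adaptively adjusted Laplace scale creates the accuracy/privacy trade-off mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_share(state, t, c=1.0, q=0.9):
    """Perturb a node's local state with Laplace noise before broadcasting.

    b_t = c * q**t is a hypothetical adaptive noise schedule: a larger
    scale gives stronger privacy but worse accuracy, so shrinking it over
    time trades early privacy protection for eventual optimization accuracy.
    """
    b_t = c * q ** t                                  # adaptively adjusted scale
    noise = rng.laplace(loc=0.0, scale=b_t, size=np.shape(state))
    return state + noise                              # only the noisy state leaves the node
```

Each node would call such a routine on its decision variable before every communication round, so neighbors (and any eavesdropper) only ever observe perturbed states.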

Highlights

  • Owing to the high demand for large-scale and distributed data processing, distributed optimization (DO) is an attractive approach. The framework of DO permits multiple units to keep their data unrevealed and to cooperatively optimize a common objective through local message exchanges and computations

  • Compared with classical DO, in which the local objective functions are usually time-invariant, online distributed optimization (ODO) is applicable to circumstances where local objective functions may change over time in uncertain and even adversarial environments

  • A variety of ODO approaches have been developed in recent years [3, 4]
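A standard performance measure for the ODO setting in the highlights above, though not written out in this summary, is the regret against the best fixed decision in hindsight. A common network-wide form (the symbols $f_{t,i}$, $x_{t,j}$, and $\mathcal{X}$ are assumed notation, with $n$ nodes and horizon $T$) is:

```latex
R_j(T) \;=\; \sum_{t=1}^{T} \sum_{i=1}^{n} f_{t,i}(x_{t,j})
       \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} \sum_{i=1}^{n} f_{t,i}(x)
```

An algorithm performs well when this quantity grows sublinearly in $T$, so the time-averaged regret $R_j(T)/T$ vanishes; this is the sense in which the abstract's "sublinear expected regret" guarantee should be read.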


Summary

Introduction

Owing to the high demand for large-scale and distributed data processing, distributed optimization (DO) is an attractive approach. The framework of DO permits multiple units to keep their data unrevealed and to cooperatively optimize a common objective through local message exchanges and computations. Li et al. [8] extended the online distributed mirror descent algorithm of [9] to a more general setup, proposing a primal-dual mirror descent distributed method for constrained ODO problems over a time-varying network, under the requirement that the weight matrices be doubly stochastic. Nedić and Olshevsky [10] first introduced extra computations and communications to overcome network imbalance by learning a specific eigenvector, and developed a push-sum-based algorithm. By using column-stochastic and row-stochastic weight matrices simultaneously, Pu et al. [11] proposed the AB algorithm, which avoids learning this eigenvector. The recent work [21] proposed an ODO algorithm with DP over time-varying networks with weight-balanced digraphs; that method, however, is only suitable for unconstrained problems and requires each node to know its out-degree. P(x) and E(x) denote the probability distribution and expectation of a random variable x, respectively.
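The push-sum idea cited above can be sketched concretely. The function below is a minimal illustration of push-sum averaging over column-stochastic weight matrices (the building block that removes the doubly-stochastic requirement on unbalanced digraphs); the function name and matrix sequence are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def push_sum_average(A_seq, x0, T):
    """Push-sum averaging over a sequence of column-stochastic matrices.

    A_seq : list of (n, n) column-stochastic weight matrices
            (each column sums to 1: a node splits its mass among out-neighbors).
    x0    : (n,) initial values; on a strongly connected digraph the
            ratio estimate converges to mean(x0).
    """
    x = np.array(x0, dtype=float)
    y = np.ones_like(x)              # auxiliary weights correct the imbalance
    for t in range(T):
        A = A_seq[t % len(A_seq)]
        x = A @ x                    # mix values along the digraph edges
        y = A @ y                    # mix weights the same way
    return x / y                     # ratio recovers the average without double stochasticity
```

Column stochasticity preserves the network-wide sums of both x and y, which is why the ratio x/y converges to the true average even when the digraph is unbalanced; optimization variants interleave such mixing steps with (possibly noisy) subgradient updates.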

Problem Formulation and Preliminaries
Differentially Private Distributed Online Algorithm
Main Results
Numerical Simulations
