Abstract

This paper considers the distributed online optimization (DOO) problem with privacy guarantees over an unbalanced directed graph, where the cost function is not explicitly known. To solve this problem, a distributed online one-point residual feedback optimization algorithm based on differential privacy is designed. Because the objective function is explicitly unknown, the algorithm employs residual feedback to estimate the true gradient information. In addition, only a row-stochastic weighting matrix is required, which removes the usual requirement of doubly stochastic weighting matrices and makes the algorithm easier to implement on directed graphs. Theoretical results show that the algorithm not only protects the privacy information of the nodes but also achieves the same sublinear regret rate as DOO algorithms based on two-point feedback. Moreover, our regret bound is established under weaker assumptions than those required by the traditional DOO algorithm based on one-point feedback. Finally, simulation results verify the effectiveness of the algorithm.
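For readers unfamiliar with the gradient-estimation idea mentioned above, the sketch below illustrates a standard one-point residual feedback estimator of the kind the abstract refers to: each round makes a single query of the (unknown) cost function and reuses the previous round's query as the "second point". The function and variable names, the sphere sampling, and the smoothing radius here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def residual_feedback_gradient(f_t, x_t, f_prev_val, delta, rng):
    """Sketch of a one-point residual feedback gradient estimate of f_t at x_t.

    f_t        : callable, current-round cost (only function values are available)
    x_t        : np.ndarray, current decision variable
    f_prev_val : float, the single perturbed function value queried last round
    delta      : float, exploration (smoothing) radius
    rng        : np.random.Generator
    """
    d = x_t.size
    # Sample a direction uniformly from the unit sphere.
    u_t = rng.standard_normal(d)
    u_t /= np.linalg.norm(u_t)
    # Single query of the current cost at the perturbed point.
    f_val = f_t(x_t + delta * u_t)
    # Residual feedback: the difference with last round's single query plays the
    # role of the second evaluation used by two-point estimators.
    grad_est = (d / delta) * (f_val - f_prev_val) * u_t
    # Return f_val so the caller can carry it forward as next round's residual.
    return grad_est, f_val
```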
