Abstract

We consider a decentralized online convex optimization problem in a network of agents, where each agent controls only a coordinate (or a part) of the global decision vector. For this problem, we propose two decentralized stochastic variants ($\mathsf{SODA\text{-}C}$ and $\mathsf{SODA\text{-}PS}$) of Nesterov's dual averaging method ($\mathsf{DA}$), in which each agent uses only a coordinate of the noise-corrupted gradient in the dual-averaging step. We show that the expected regret of both algorithms grows sublinearly as $O(\sqrt{T})$ in the time horizon $T$, even when the underlying communication topology is time-varying. This sublinear regret is obtained with a stepsize of the form $1/\sqrt{t}$, provided the objective functions are Lipschitz-continuous convex functions with Lipschitz gradients and the variance of the noisy gradients is bounded. We also provide simulation results of the proposed algorithms on sensor networks to complement our theoretical analysis.
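To make the dual-averaging step concrete, the following minimal Python sketch implements the centralized stochastic variant that $\mathsf{SODA\text{-}C}$ and $\mathsf{SODA\text{-}PS}$ decentralize: a running sum of noisy gradients is mapped back to the primal space with the $1/\sqrt{t}$ stepsize from the abstract. The oracle `noisy_grad`, the Euclidean prox $\psi(x) = \|x\|^2/2$, and the ball constraint are illustrative assumptions for this sketch, not the paper's exact setup or notation.

```python
import numpy as np

def stochastic_dual_averaging(noisy_grad, dim, T, radius=1.0):
    """Sketch of Nesterov's stochastic dual averaging (centralized).

    noisy_grad(x, t) returns a noise-corrupted gradient of f_t at x;
    the feasible set is assumed to be the Euclidean ball of the given radius.
    """
    z = np.zeros(dim)                 # dual variable: running sum of noisy gradients
    x = np.zeros(dim)                 # primal iterate, starts at the prox center
    for t in range(1, T + 1):
        z += noisy_grad(x, t)         # dual step: accumulate the noisy gradient
        x = -z / np.sqrt(t)           # primal map: argmin <z, x> + sqrt(t) * ||x||^2 / 2
        nrm = np.linalg.norm(x)
        if nrm > radius:              # project back onto the feasible ball
            x *= radius / nrm
    return x

# Hypothetical usage: track the minimizer of f(x) = ||x - c||^2 / 2
# from gradients corrupted by bounded-variance Gaussian noise.
rng = np.random.default_rng(0)
c = np.array([0.5, -0.3])
oracle = lambda x, t: (x - c) + 0.1 * rng.standard_normal(x.shape)
x_final = stochastic_dual_averaging(oracle, dim=2, T=10_000)
```

In the decentralized variants described above, each agent would maintain only its own coordinate of the dual variable, update it with its coordinate of the noisy gradient, and rely on communication over the (time-varying) network in place of the global update shown here.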
