Abstract

In this article, we consider an online distributed composite optimization problem over a time-varying multiagent network consisting of multiple interacting nodes, where the objective function of each node has two parts: a time-varying loss function and a regularization function. This problem naturally arises in many real-world applications, ranging from wireless sensor networks to signal processing. We propose a class of online distributed optimization algorithms based on approximate mirror descent, which uses the Bregman divergence as a distance-measuring function and includes the Euclidean distance as a special case. We consider two standard information feedback models when designing the algorithms, namely full-information feedback and bandit feedback. For the full-information feedback model, the first algorithm attains an average regularized regret of order O(1/√T), where T is the total number of rounds. The second algorithm, which requires only the values of the loss function at two predicted points rather than gradient information, achieves the same average regularized regret as the first. Simulation results on a distributed online regularized linear regression problem are provided to illustrate the performance of the proposed algorithms.
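To make the ingredients concrete, the following is a minimal sketch, not the authors' exact algorithm: a toy distributed online composite update on a hypothetical 3-node network, using the Euclidean Bregman divergence (so the mirror step reduces to a proximal gradient step with an ℓ1 regularizer) and, for the bandit case, a standard two-point gradient estimator built from two loss evaluations. The network, losses, and step sizes below are illustrative assumptions.

```python
import numpy as np

def prox_l1(z, t):
    """Soft-thresholding: proximal map of t*||.||_1 (Euclidean Bregman case)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def two_point_grad(f, x, delta, rng):
    """Gradient estimate from two loss evaluations (bandit feedback)."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

def run(bandit, T=300, eta=0.05, lam=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Hypothetical 3-node network with a doubly stochastic mixing matrix W.
    W = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
    # Per-node data: node i's loss is 0.5*||x - targets[i]||^2 (illustrative).
    targets = np.array([[1.0, 0.0], [0.8, 0.2], [1.2, -0.2]])
    X = np.zeros((3, 2))
    for t in range(T):
        mixed = W @ X  # consensus step: each node averages with its neighbors
        for i in range(3):
            f = lambda x, b=targets[i]: 0.5 * np.sum((x - b) ** 2)
            if bandit:
                # Only two loss values are observed, no gradient.
                g = two_point_grad(f, mixed[i], 0.01, rng)
            else:
                g = mixed[i] - targets[i]  # exact gradient (full information)
            # Composite mirror-descent step: gradient step + prox of regularizer.
            X[i] = prox_l1(mixed[i] - eta * g, eta * lam)
    return X.mean(axis=0)
```

With these quadratic losses, both variants drive the network average toward the soft-thresholded mean of the node targets, with the bandit variant converging more noisily because each round uses only two loss evaluations per node.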
