Abstract

In this paper, we study several problems at the intersection of decentralized optimization and online learning. Decentralized optimization plays a vital role in machine learning and has recently attracted considerable attention due to its inherent advantages for edge computation. Many decentralized optimization algorithms, both projection-based and projection-free, have been proposed with theoretical guarantees, but they focus mainly on offline settings. However, in most real-world machine learning problems the data is revealed online, for example in recommender systems, image/video processing, and stock portfolio management. Therefore, in this work we study decentralized optimization in the online setting with constraints imposed on the solutions (e.g., sparsity or low rank of matrices). More specifically, we consider the problem of minimizing an aggregate of convex loss functions that arrive over time and whose components are distributed over a connected network. We present a consensus-based online decentralized Frank-Wolfe algorithm that relies on stochastic gradient estimates and achieves an asymptotically tight regret guarantee of O(√T), where T is the time horizon. Furthermore, we demonstrate the performance of this algorithm on online multiclass logistic regression over standard benchmark image datasets (MNIST, CIFAR-10), comparing against centralized online algorithms. Our regret bound improves upon those of the previously best-known decentralized constrained online algorithms.
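To make the algorithmic idea concrete, below is a minimal sketch of one round of a consensus-based decentralized online Frank-Wolfe update, assuming an ℓ1-ball constraint (a standard choice for sparsity) and a doubly stochastic mixing matrix W over the network. The step-size schedule, the gossiping of gradient estimates, and the helper names (lmo_l1, decentralized_online_fw) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def lmo_l1(grad, radius=1.0):
    """Linear minimization oracle over the l1-ball:
    argmin_{||v||_1 <= radius} <grad, v>."""
    k = np.argmax(np.abs(grad))
    v = np.zeros_like(grad)
    v[k] = -radius * np.sign(grad[k])
    return v

def decentralized_online_fw(W, x, stoch_grads, t, radius=1.0):
    """One round of a consensus-based online Frank-Wolfe update (sketch).

    W           : (n, n) doubly stochastic mixing matrix of the network
    x           : (n, d) current iterates, one row per node
    stoch_grads : (n, d) stochastic gradient estimates of the local losses
    t           : round index, used for the decaying step size
    """
    eta = 1.0 / np.sqrt(t + 1)        # decaying step size (illustrative choice)
    y = W @ x                         # consensus step: average with neighbors
    g = W @ stoch_grads               # gossip the local gradient estimates too
    v = np.vstack([lmo_l1(g[i], radius) for i in range(x.shape[0])])
    return (1.0 - eta) * y + eta * v  # convex combination stays feasible

# Toy usage: 4 nodes on a complete graph with uniform mixing weights.
rng = np.random.default_rng(0)
n, d = 4, 10
W = np.full((n, n), 1.0 / n)
x = np.zeros((n, d))                  # zero vector lies inside the l1-ball
for t in range(100):
    grads = rng.normal(size=(n, d))   # stand-in for real loss gradients
    x = decentralized_online_fw(W, x, grads, t)
```

Because each update is a convex combination of points in the constraint set, the iterates remain feasible without ever computing a projection; this is the property that makes Frank-Wolfe-type methods attractive for constraints such as sparsity or low rank, where projections are expensive but linear minimization is cheap.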
