Abstract

We consider a decentralized stochastic optimization problem over a network of agents, modeled as a directed graph: the agents aim to asynchronously minimize the average of their individual (possibly non-convex) losses, each having access only to a noisy estimate of the gradient of its own function. We propose an asynchronous distributed algorithm for this class of problems. The algorithm combines stochastic gradients with gradient tracking in an asynchronous push-sum framework and achieves a sublinear convergence rate, matching that of centralized stochastic gradient descent applied to the non-convex minimization. Our experiments on a non-convex image classification task using a convolutional neural network validate the convergence of the proposed algorithm across different numbers of nodes and graph connectivity percentages.
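
To make the update structure concrete, below is a minimal synchronous sketch of push-sum mixing combined with stochastic gradient tracking on a directed ring. It illustrates the general technique the abstract names, not the paper's asynchronous algorithm: the graph, the quadratic losses, the noise model, and the step size are all assumptions introduced here for illustration.

```python
import numpy as np

# Synchronous sketch of push-sum + stochastic gradient tracking on a
# directed ring. Illustrative only; the paper's algorithm is asynchronous
# and its exact updates are not reproduced here.

rng = np.random.default_rng(0)
n, d, steps, alpha = 5, 3, 2000, 0.01  # nodes, dimension, iterations, step size (assumed)

# Assumed local losses f_i(x) = 0.5 * ||x - b_i||^2; the minimizer of
# the network-average loss is the mean of the b_i.
b = rng.normal(size=(n, d))
x_star = b.mean(axis=0)

def stoch_grad(i, z):
    # Noisy estimate of grad f_i at z (assumed Gaussian noise).
    return (z - b[i]) + 0.1 * rng.normal(size=d)

# Column-stochastic mixing matrix for a directed ring: each node keeps
# half its mass and pushes half to its out-neighbor. Push-sum requires
# column stochasticity, which directed graphs can satisfy without
# doubly stochastic weights.
A = 0.5 * np.eye(n)
for i in range(n):
    A[(i + 1) % n, i] += 0.5

x = np.zeros((n, d))           # push-sum numerators (model estimates)
w = np.ones(n)                 # push-sum weights
z = x / w[:, None]             # de-biased iterates z_i = x_i / w_i
g = np.array([stoch_grad(i, z[i]) for i in range(n)])
y = g.copy()                   # gradient trackers, initialized at g_i

for _ in range(steps):
    # Local descent along the tracked direction, then push-sum mixing.
    x = A @ (x - alpha * y)
    w = A @ w
    z_new = x / w[:, None]
    g_new = np.array([stoch_grad(i, z_new[i]) for i in range(n)])
    # Gradient tracking: mix trackers and add the local gradient
    # increment, so the trackers' sum equals the sum of current gradients.
    y = A @ y + g_new - g
    z, g = z_new, g_new

print("distance to minimizer:", np.linalg.norm(z.mean(axis=0) - x_star))
```

Because A is column-stochastic, the trackers satisfy sum_i y_i = sum_i g_i at every iteration, which is the invariant that lets each node follow an estimate of the network-average gradient despite only communicating with out-neighbors.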
