Abstract

Parallel and distributed stochastic algorithms have drawn significant attention in the realm of big data and machine learning. While the synchronous versions of these algorithms are well understood in terms of their convergence, the convergence of their asynchronous counterparts is far less studied. In this paper, we propose and analyze a distributed, asynchronous parallel algorithm to solve an arbitrary, consistent linear system by reformulating the system into a stochastic optimization problem, following Richtárik and Takáč [1]. We compare the convergence rates of our asynchronous algorithm with the synchronous parallel algorithm proposed by Richtárik and Takáč in [1] under different choices of the hyperparameters: the stepsize, the damping factor, the number of processors, and the delay factor. We show that our asynchronous parallel algorithm enjoys a global linear convergence rate, similar to the synchronous parallel algorithm in [1] under the same setup. We also show that our asynchronous algorithm achieves a better convergence rate than the synchronous algorithm in [1] when the number of processors is larger than four. Furthermore, our asynchronous parallel algorithm performs asymptotically better than its synchronous counterpart for certain linear systems. Finally, we compute the minimum number of processors required for those systems for which our asynchronous algorithm is better and find that this number can be as low as two for some ill-conditioned problems.
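
To make the setting concrete, the following is a minimal sketch of the kind of scheme the abstract describes: a consistent linear system Ax = b treated as a stochastic optimization problem and solved by randomized, damped projection steps computed from possibly stale iterates. It is an illustration only, not the paper's algorithm: the sketching matrices are taken to be unit coordinate vectors (a relaxed randomized Kaczmarz update), asynchrony is simulated in a single thread by reading an iterate up to `tau` steps old, and all names and parameters (`omega`, `tau`, `num_iters`) are assumptions made for this example.

```python
import numpy as np

# Single-thread simulation of an asynchronous, relaxed randomized Kaczmarz
# iteration for a consistent system A x = b.  Illustrative sketch only:
# each "worker" update is computed from a stale iterate (bounded delay `tau`)
# but applied to the current iterate, mimicking asynchronous execution.
def async_kaczmarz(A, b, omega=1.0, tau=4, num_iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    history = [x.copy()]                 # recent iterates, to emulate stale reads
    row_norms = np.sum(A * A, axis=1)    # squared row norms ||a_i||^2
    for _ in range(num_iters):
        i = rng.integers(m)                                  # sample a row uniformly
        delay = rng.integers(min(tau, len(history))) + 1     # bounded staleness
        x_stale = history[-delay]                            # stale read
        residual = A[i] @ x_stale - b[i]
        x = x - omega * (residual / row_norms[i]) * A[i]     # damped projection step
        history.append(x.copy())
        if len(history) > tau + 1:
            history.pop(0)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    x_true = rng.standard_normal(50)
    b = A @ x_true                       # consistent system by construction
    x_hat = async_kaczmarz(A, b, omega=1.0, tau=8)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In this toy simulation, `omega` plays the role of the stepsize/damping and `tau` the delay factor; the paper's analysis concerns how such choices, together with the number of processors, affect the linear convergence rate.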
