Abstract

In this article, we consider distributed composite optimization problems with a common non-smooth regularization term over an undirected, connected network. Motivated by the wide range of applications of such problems in large-scale machine learning, the local cost function of each node is further modeled as an average of a finite number of local cost subfunctions. In this scenario, most existing solutions based on proximal methods tend to ignore the cost of gradient evaluations, which degrades performance. We instead develop a distributed stochastic proximal-gradient algorithm that tackles such problems by employing a local unbiased stochastic averaging gradient method. At each iteration, each node evaluates the gradient of only a single local cost subfunction, and the average of the latest stochastic gradients serves as an approximation of the true local gradient. An explicit linear convergence rate of the proposed algorithm is established with constant dual step-sizes for strongly convex local cost subfunctions with Lipschitz-continuous gradients. Furthermore, we show that, in the smooth case, our simplified analysis technique extends to several notable primal-dual-domain algorithms, such as DSA, EXTRA, and DIGing. Numerical experiments are presented to confirm the theoretical findings.
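To make the gradient-estimation idea in the abstract concrete, the following is a minimal single-node sketch, not the paper's formal algorithm: it assumes an l1 regularizer, a SAGA-style table of stored subfunction gradients, and omits the network mixing and dual variables of the full distributed method. The names prox_l1, saga_prox_step, and the uniform sampling scheme are illustrative assumptions.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||x||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def saga_prox_step(x, grads_table, grad_fns, step, lam, rng):
    """One illustrative SAGA-style proximal-gradient step at a single node.

    grads_table[j] holds the most recently computed gradient of local
    subfunction j; only one subfunction gradient is re-evaluated per step,
    and the table average approximates the true local gradient.
    """
    q = len(grad_fns)
    s = rng.integers(q)                  # sample one local subfunction
    g_new = grad_fns[s](x)               # single gradient evaluation this step
    g_avg = grads_table.mean(axis=0)     # average of the latest stored gradients
    g_hat = g_new - grads_table[s] + g_avg   # unbiased local gradient estimate
    grads_table[s] = g_new               # refresh the stored gradient
    return prox_l1(x - step * g_hat, step * lam)
```

A usage sketch: initialize `grads_table = np.stack([g(x0) for g in grad_fns])` and call `saga_prox_step` repeatedly with `rng = np.random.default_rng()`; in the distributed setting each node would additionally exchange its iterate with neighbors and update dual variables, which this sketch leaves out.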
