Abstract

This paper addresses composite constrained convex optimization, in which the objective is a sum of smooth convex functions and a non-smooth regularization term (the ℓ1 norm), subject to general local constraints. Motivated by modern large-scale information processing problems in machine learning, where the samples of a training dataset are randomly distributed across multiple computing nodes, each smooth objective function is further modeled as the average of several constituent functions. To address the problem in a decentralized fashion, we propose a novel computation-efficient decentralized stochastic gradient algorithm, which combines a variance reduction technique with a decentralized stochastic gradient projection method using a constant step-size. Theoretical analysis indicates that if the constant step-size is less than an explicitly estimated upper bound, the proposed algorithm finds the exact optimal solution in expectation when each (smooth) constituent function is strongly convex. Compared with existing decentralized schemes, the proposed algorithm is not only suitable for general constrained optimization problems but also incurs a low computation cost in terms of the total number of local gradient evaluations. Furthermore, by incorporating a differential privacy strategy, the proposed algorithm can effectively mask the privacy of each constituent function, which is valuable in applications involving sensitive data, such as military affairs or medical treatment. Finally, numerical evidence is provided to demonstrate the appealing performance of the proposed algorithm.
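To illustrate the kind of update the abstract refers to, the following is a minimal single-node sketch of a generic SVRG-style variance-reduced projected stochastic gradient step with a constant step-size. It is an assumption-laden illustration, not the paper's exact method: the consensus/mixing step across neighboring nodes, the ℓ1 regularization term, and the differential-privacy noise are all omitted, and the ℓ2-ball constraint and least-squares objective are placeholders chosen for the example.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    # Euclidean projection onto an l2 ball (a placeholder constraint set
    # standing in for the general local constraints described in the paper).
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def svrg_projected_step(x, x_snapshot, full_grad_snapshot, grad_fn, idx, step_size):
    # Variance-reduced gradient: stochastic gradient at x, corrected by the
    # stochastic gradient at the snapshot point plus the full gradient there.
    g = grad_fn(x, idx) - grad_fn(x_snapshot, idx) + full_grad_snapshot
    # Constant step-size step followed by projection onto the constraint set.
    return project_l2_ball(x - step_size * g)

# Hypothetical local objective: f(x) = (1/m) * sum_j 0.5 * (A_j x - b_j)^2,
# so each constituent function corresponds to one data sample (A_j, b_j).
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 10)), rng.normal(size=50)
grad_fn = lambda x, j: A[j] * (A[j] @ x - b[j])        # gradient of one constituent function
full_grad = lambda x: A.T @ (A @ x - b) / len(b)       # full local gradient at the snapshot

x = np.zeros(10)
x_snap, mu = x.copy(), full_grad(x)                    # snapshot point and its full gradient
for _ in range(200):
    j = rng.integers(len(b))                           # sample one constituent function
    x = svrg_projected_step(x, x_snap, mu, grad_fn, j, step_size=0.01)
```

In a decentralized setting, each node would additionally average its iterate with those of its neighbors before or after such a step, and the differential-privacy variant would perturb the exchanged quantities with calibrated noise; those elements are deliberately left out of this sketch.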
