Abstract

We consider the problem of minimizing a finite sum of differentiable and nondifferentiable convex functions over a finite-dimensional Euclidean space. We propose and analyze a distributed proximal gradient method that tolerates computational delays. Allowing local delays when computing the gradient of each differentiable cost function lets the method use out-of-date iterates to generate the next estimates, which is beneficial when gradient computations are so expensive that they cannot be completed within a limited time budget. We provide a condition on the control parameter that guarantees the sequences generated by the proposed method converge to the unique solution. Finally, we illustrate the theoretical results with numerical experiments on binary image classification.
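The paper's precise update rule and delay model are given in the full text; the following is only a minimal Python sketch of the general idea, under illustrative assumptions: a single shared nonsmooth term handled by soft-thresholding, a fixed uniform delay applied to every local gradient, and hypothetical names (`delayed_prox_grad`, `prox_l1`) introduced here for illustration.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def delayed_prox_grad(grads, prox, x0, step, delay, n_iters):
    """Proximal gradient step using stale local gradients.

    grads : list of gradient oracles, one per differentiable local cost f_i
    prox  : proximal operator of the nondifferentiable term, called as prox(v, step)
    delay : every local gradient is evaluated at the iterate from `delay` steps ago
    """
    history = [x0.copy()]
    x = x0.copy()
    for _ in range(n_iters):
        # Use an out-of-date iterate, modeling expensive gradient computations
        # that finish `delay` iterations late.
        x_stale = history[max(0, len(history) - 1 - delay)]
        g = sum(grad(x_stale) for grad in grads)
        x = prox(x - step * g, step)
        history.append(x.copy())
    return x

# Example: a lasso problem split across 3 "agents" --
# minimize 0.5 * ||A x - b||^2 + lam * ||x||_1.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((90, 20)), rng.standard_normal(90)
blocks = np.array_split(np.arange(90), 3)
grads = [lambda x, Ai=A[idx], bi=b[idx]: Ai.T @ (Ai @ x - bi) for idx in blocks]
lam = 0.1
x = delayed_prox_grad(grads, lambda v, t: prox_l1(v, lam * t),
                      np.zeros(20), step=1e-3, delay=2, n_iters=500)
```

As the abstract indicates, convergence under delays hinges on a suitably restricted control parameter; in this sketch that corresponds to choosing `step` small enough relative to the gradients' Lipschitz constant and the delay bound.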
