Abstract

In this article, we study convex optimization problems in which the agents of a network cooperatively minimize a global objective function composed of multiple local objective functions. This work aims to solve large-scale optimization problems in which the local objective functions are complicated and numerous. Unlike most existing works, the local objective function of each agent is expressed as the average of a finite number of instantaneous functions. By integrating the gradient tracking algorithm with the stochastic averaging gradient technique, a distributed stochastic gradient tracking algorithm (termed S-DIGing) is proposed. At each time instant, each agent computes the gradient of only one randomly selected instantaneous function and uses it to approximate the local batch gradient. Based on a novel primal-dual interpretation of the S-DIGing algorithm, it is shown that S-DIGing converges linearly to the global optimal solution when the step-size does not exceed an explicit upper bound and the instantaneous functions are strongly convex with Lipschitz continuous gradients. Numerical experiments demonstrate the practicality of the S-DIGing algorithm and the correctness of the theoretical results.
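The abstract does not state the update equations, so the following is only a rough illustration: a minimal NumPy sketch that combines the standard DIGing gradient tracking recursion with a SAGA-style stochastic averaging gradient estimator, matching the mechanism described above (one randomly selected instantaneous gradient per agent per iteration). The least-squares instance, the complete-graph mixing matrix W, and the step-size alpha are illustrative assumptions, not taken from the paper.

# Illustrative sketch of an S-DIGing-style update (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 4, 10, 3                 # agents, instantaneous functions per agent, dimension
A = rng.normal(size=(n, m, d))     # data for f_{i,l}(x) = 0.5 * (a_{i,l}^T x - b_{i,l})^2
b = rng.normal(size=(n, m))
W = np.full((n, n), 1.0 / n)       # doubly stochastic mixing matrix (complete graph, assumed)
alpha = 0.02                       # step-size (assumed; must satisfy the paper's upper bound)

def inst_grad(i, l, x):
    # Gradient of the l-th instantaneous function of agent i.
    a = A[i, l]
    return a * (a @ x - b[i, l])

x = np.zeros((n, d))               # local estimates x_i
table = np.zeros((n, m, d))        # SAGA gradient table per agent
for i in range(n):
    for l in range(m):
        table[i, l] = inst_grad(i, l, x[i])
g = table.mean(axis=1)             # local stochastic gradient estimates
y = g.copy()                       # gradient trackers y_i

for k in range(2000):
    x_new = W @ x - alpha * y      # consensus step plus descent along tracked gradient
    g_new = np.empty_like(g)
    for i in range(n):
        l = rng.integers(m)        # one randomly selected instantaneous function
        fresh = inst_grad(i, l, x_new[i])
        # SAGA estimator: fresh gradient minus stale table entry plus table average.
        g_new[i] = fresh - table[i, l] + table[i].mean(axis=0)
        table[i, l] = fresh        # refresh only the sampled entry
    y = W @ y + g_new - g          # gradient tracking update
    x, g = x_new, g_new

Under strong convexity and Lipschitz-continuous gradients, the tracked direction y_i approaches the global batch gradient while each agent evaluates only a single instantaneous gradient per iteration, which is the source of the per-iteration savings the abstract highlights.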
