Abstract

This article studies decentralized optimization for minimizing a finite sum of convex cost functions over the nodes of a network, where each cost function is itself the average of several constituent functions. A review of existing work shows that decentralized accelerated methods that improve both communication and computation efficiency have not yet been investigated. Motivated by this gap, we present an effective event-triggered decentralized accelerated stochastic gradient algorithm, ET-DASG. ET-DASG leverages an event-triggering strategy to improve communication efficiency, the variance-reduction technique of SAGA to promote computation efficiency, and Nesterov's acceleration mechanism for accelerated convergence. We provide a convergence analysis and show that ET-DASG with a well-selected constant step-size converges in the mean to the exact optimal solution. Moreover, owing to the adoption of a gradient-tracking scheme, a linear convergence rate is achieved when each constituent function is strongly convex and smooth. Under certain conditions, we prove that for each node the time interval between two successive triggering instants is larger than the iteration interval. Finally, simulation results confirm the appealing performance of ET-DASG.
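As a rough illustration of the ingredients named above, the sketch below combines a single-node SAGA gradient estimator, a Nesterov-style extrapolation step, and an event-triggering rule that broadcasts the local state only when it drifts past a threshold. This is a minimal sketch under our own assumptions: the names (SagaNode, should_broadcast), the grads[j](x) interface, and the step-size, momentum, and threshold values are illustrative and not taken from the paper, which additionally employs gradient tracking and mixing with neighbors over the network.

```python
import numpy as np

class SagaNode:
    """One node holding q constituent functions; grads[j](x) is assumed
    to return the gradient of the j-th constituent function at x."""

    def __init__(self, grads, dim, rng):
        self.grads = grads
        self.table = np.zeros((len(grads), dim))  # stored component gradients
        self.avg = self.table.mean(axis=0)        # running average of the table
        self.rng = rng

    def saga_grad(self, x):
        """Unbiased SAGA estimator of the local full gradient at x."""
        j = self.rng.integers(len(self.grads))    # sample one component
        g_new = self.grads[j](x)
        v = g_new - self.table[j] + self.avg      # variance-reduced estimate
        self.avg += (g_new - self.table[j]) / len(self.grads)
        self.table[j] = g_new                     # refresh the stored gradient
        return v

def should_broadcast(x, x_sent, threshold):
    """Event trigger: communicate only when the state has drifted far
    enough from the value last sent to the neighbors."""
    return np.linalg.norm(x - x_sent) > threshold

# Demo: one node minimizing (1/q) * sum_j 0.5 * (a_j @ x - b_j)**2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((5, 3)), rng.standard_normal(5)
grads = [lambda x, a=A[k], bk=b[k]: (a @ x - bk) * a for k in range(5)]
node = SagaNode(grads, dim=3, rng=rng)

x = x_prev = x_sent = np.zeros(3)
alpha, beta, thr = 0.05, 0.5, 1e-3                # illustrative values
for _ in range(500):
    y = x + beta * (x - x_prev)                   # Nesterov extrapolation
    x_prev, x = x, y - alpha * node.saga_grad(y)  # variance-reduced step
    if should_broadcast(x, x_sent, thr):
        x_sent = x.copy()                         # would be sent to neighbors

print("full-gradient norm:", np.linalg.norm(A.T @ (A @ x - b)) / len(b))
```

Updating the stored-gradient average in O(dim) per step is the standard trick that keeps SAGA's per-iteration cost comparable to plain stochastic gradient descent, which is what the abstract refers to as computation efficiency.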
