Abstract

In this chapter, we focus on improving both communication and computational efficiency in distributed optimization. The problem under study is to minimize a finite sum of convex cost functions over the nodes of a network, where each cost function is itself the average of several constituent functions. In the existing literature, no method improves communication efficiency and computational efficiency simultaneously. To achieve this goal, we introduce an effective event-triggered distributed accelerated stochastic gradient algorithm, namely ET-DASG. ET-DASG improves communication efficiency through an event-triggered strategy, improves computational efficiency by using SAGA's variance-reduction technique, and accelerates convergence by means of Nesterov's acceleration mechanism, thereby improving communication and computational efficiency at the same time. Furthermore, we provide a convergence analysis demonstrating that ET-DASG converges in the mean to the exact optimal solution with a well-selected constant step-size. Thanks to the gradient tracking scheme, the algorithm achieves a linear convergence rate when each constituent function is strongly convex and smooth. Moreover, under certain conditions, we prove that the time interval between two successive triggering moments is larger than the iteration interval for each node. Finally, we confirm the attractive performance of ET-DASG through simulation results.

Keywords: Distributed optimization; Stochastic algorithm; Event-triggered; Variance reduction; Nesterov's acceleration
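For concreteness, the problem described above can be written (with assumed notation: n nodes, node i holding m_i constituent functions) as

    minimize over x:   f(x) = (1/n) * sum_{i=1..n} f_i(x),   where   f_i(x) = (1/m_i) * sum_{j=1..m_i} f_{i,j}(x).

The sketch below is a minimal, single-node Python illustration of two of the ingredients named in the abstract: a SAGA-style variance-reduced gradient estimate and a norm-based event-triggered communication test. It is not the ET-DASG algorithm itself (the chapter's actual triggering rule, Nesterov acceleration, and gradient tracking updates are not reproduced here), and the names saga_gradient, should_trigger, grad_fn, and threshold are illustrative assumptions.

    import numpy as np

    def saga_gradient(x, j, grad_fn, table):
        """SAGA-style variance-reduced gradient estimate for one sampled constituent j.

        grad_fn(x, j) returns the gradient of the j-th constituent function at x;
        table (shape m x p) stores the most recently evaluated gradient of each constituent.
        """
        g_new = grad_fn(x, j)
        estimate = g_new - table[j] + table.mean(axis=0)
        table[j] = g_new  # refresh the stored gradient for constituent j
        return estimate

    def should_trigger(x, x_last_broadcast, threshold):
        """Event-triggered test: communicate only when the local state has
        drifted sufficiently far from the value last broadcast to neighbors."""
        return np.linalg.norm(x - x_last_broadcast) > threshold

In an event-triggered scheme of this kind, each node evaluates should_trigger at every iteration and broadcasts its state to neighbors only when the test passes, which is how communication rounds are saved relative to a time-triggered scheme.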
