Abstract

In recent years, significant progress has been made in the field of distributed optimization algorithms. This study focuses on the distributed convex optimization problem over an undirected network, where the goal is to minimize the average of the local objective functions held by the agents while each agent communicates only with its neighbors. Building on a state-of-the-art algorithm, we propose a novel distributed optimization algorithm for the case in which each agent's objective function is smooth and strongly convex. Faster convergence is attained by employing the Nesterov and Heavy-ball acceleration methods simultaneously, making the algorithm suitable for many large-scale distributed tasks. Moreover, the step-sizes and acceleration momentum coefficients are designed to be uncoordinated, time-varying, and nonidentical, which allows the algorithm to adapt to a wide range of application scenarios. Under some necessary assumptions and conditions, a linear convergence rate is established through rigorous theoretical analysis. Finally, numerical experiments on a real dataset demonstrate the efficacy of the proposed algorithm and its advantages over similar algorithms.
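For concreteness, the problem described above is the standard consensus optimization problem; in LaTeX notation (the symbols n, m, and f_i are illustrative and may differ from the paper's own notation),

\min_{x \in \mathbb{R}^{m}} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),

where n is the number of agents, f_i is the local objective function known only to agent i, and each agent may exchange its local estimate of x only with its neighbors in the undirected communication graph.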

Highlights

  • Statement of Contributions: Throughout this article, we mainly focus on distributed convex optimization methods over an undirected network

  • The proposed algorithm is mainly applied to the distributed convex optimization problem over an undirected network, in which all agents collaboratively minimize the average of their local objective functions

  • When the largest step-size and the maximum momentum coefficient do not exceed the upper bounds provided in Theorem 1, UGNH converges linearly, provided that each local objective function is smooth and strongly convex (these conditions are recalled below)
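For reference, the smoothness and strong convexity conditions mentioned above are the standard ones; in LaTeX notation (the constants L_i and \mu_i are illustrative),

\|\nabla f_i(x) - \nabla f_i(y)\| \le L_i \|x - y\| \qquad (L_i\text{-smoothness}),
f_i(y) \ge f_i(x) + \nabla f_i(x)^{\top}(y - x) + \tfrac{\mu_i}{2}\|y - x\|^{2} \qquad (\mu_i\text{-strong convexity}),

for all x, y \in \mathbb{R}^{m}.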


Summary

Introduction

Uncoordinated step-sizes for different agents are required rather than identical constant step-sizes. This situation was first studied in [36], where an augmented distributed-gradient method was proposed, but it converged only sublinearly. Under more relaxed step-size and network-topology conditions, a distributed primal-dual optimization method utilizing time-varying step-sizes was proposed in [38] and proved to converge linearly. With the help of the Nesterov [39] and Heavy-ball [40] acceleration methods, the convergence rate of distributed optimization algorithms can be improved. We propose a novel distributed optimization algorithm with uncoordinated, time-varying, and nonidentical step-sizes and acceleration momentum terms, which achieves a faster linear convergence rate and applies to a wider range of scenarios. Let ∇f(x): ℝ^m → ℝ^m denote the gradient of f(x) at x.
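As a rough illustration of how a Nesterov-style extrapolation and a Heavy-ball momentum term can be combined in a decentralized gradient iteration with uncoordinated step-sizes, consider the following Python sketch. It is not the paper's UGNH update rule; the mixing matrix W, the coefficient vectors alpha, beta, gamma, and the function names are hypothetical.

import numpy as np

# Illustrative sketch only: a generic decentralized gradient iteration that
# combines a Nesterov-style extrapolation with a Heavy-ball momentum term
# and per-agent (uncoordinated) step-sizes. This is NOT the paper's UGNH
# update rule; W, alpha, beta, gamma, and grads are hypothetical names.

def accelerated_consensus_step(X, X_prev, grads, W, alpha, beta, gamma):
    """One synchronous iteration over all n agents.

    X, X_prev : (n, m) arrays of current and previous local estimates
    grads     : callable, grads(Y)[i] = gradient of f_i at Y[i], shape (n, m)
    W         : (n, n) doubly stochastic mixing matrix of the undirected graph
    alpha     : (n,) per-agent step-sizes
    beta      : (n,) per-agent Heavy-ball momentum coefficients
    gamma     : (n,) per-agent Nesterov extrapolation coefficients
    """
    Y = X + gamma[:, None] * (X - X_prev)       # Nesterov extrapolation
    X_next = (W @ Y                             # mixing with neighbors
              - alpha[:, None] * grads(Y)       # local gradient step
              + beta[:, None] * (X - X_prev))   # Heavy-ball momentum
    return X_next, X                            # new and previous iterates

In a fully decentralized implementation, agent i would form only row i of W @ Y from messages received from its neighbors, so no global coordination of the step-sizes or momentum coefficients is needed.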

Problem Formulation
Assumptions
Algorithm Development
Related Algorithms
Distributed Accelerated Methods
The Proposed Algorithm
Convergence Analysis
Supporting Lemmas
Main Results
Numerical Experiments
Conclusions

