Abstract

With the widespread deployment of distributed systems, the design of distributed optimization strategies has become a research hotspot. This article focuses on the convergence rate of distributed convex optimization algorithms in which each agent in the network has its own convex cost function. We consider a gradient-based distributed method and use a push-pull gradient algorithm to minimize the total cost function. Inspired by existing multi-agent consensus protocols for distributed convex optimization, a distributed convex optimization algorithm with finite-time convergence is proposed and studied. Finally, for a fixed undirected network topology, a fast-convergent distributed cooperative learning method based on a linearly parameterized neural network is proposed; unlike existing distributed convex optimization algorithms, which achieve at best exponential convergence, the proposed algorithm achieves finite-time convergence. Convergence is guaranteed via the Lyapunov method, and the corresponding simulation examples illustrate the effectiveness of the algorithm. Compared with other algorithms, the proposed algorithm is competitive.
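
As a rough illustration of the push-pull gradient method mentioned above, the sketch below runs one common form of the iteration (a row-stochastic "pull" matrix for the decision variables and a column-stochastic "push" matrix for the gradient trackers) on a small ring network of quadratic costs. The step size, cost functions, and mixing matrices are illustrative assumptions, not the exact setup analysed in the paper.

```python
import numpy as np

n = 5                                  # number of agents
alpha = 0.05                           # step size (assumed constant)
targets = np.arange(n, dtype=float)    # hypothetical data: f_i(x) = 0.5*(x - targets[i])^2

def grad(i, x):
    """Gradient of the i-th local quadratic cost."""
    return x - targets[i]

# Mixing matrices over a ring with self-loops:
# R (row-stochastic) mixes decision variables, C (column-stochastic) mixes gradient trackers.
A = np.eye(n) + np.roll(np.eye(n), 1, axis=1)   # self-loop + one neighbour
R = A / A.sum(axis=1, keepdims=True)            # row-stochastic
C = A / A.sum(axis=0, keepdims=True)            # column-stochastic

x = np.zeros(n)                                           # local decision variables
y = np.array([grad(i, x[i]) for i in range(n)])           # gradient trackers, y_i(0) = grad f_i(x_i(0))

for _ in range(500):
    x_new = R @ (x - alpha * y)                           # "pull" step: mix gradient-corrected states
    y = C @ y + np.array([grad(i, x_new[i]) - grad(i, x[i])
                          for i in range(n)])             # "push" step: track the average gradient
    x = x_new

print(x)   # each entry should approach the global minimiser, mean(targets) = 2.0
```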

Highlights

  • Based on a fixed undirected network topology, a fast-convergent distributed cooperative learning method built on a linearly parameterized neural network is proposed; unlike existing distributed convex optimization algorithms that achieve exponential convergence, it converges in finite time

  • The proposed distributed convex optimization algorithm gives an explicit upper bound on the convergence time, which depends on the initial state of the algorithm, the algorithm parameters, and the network topology (a generic bound is sketched after this list)

  • We study the distributed optimization problem on the network
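
To illustrate how such a convergence-time bound typically depends on the initial state and the design parameters, the following is a standard finite-time Lyapunov estimate; the exact bound derived in the paper may take a different form.

```latex
% If a Lyapunov function V along the algorithm's trajectories satisfies a
% fractional-power decay condition, it reaches zero in finite time:
\[
  \dot{V}(t) \le -c\,V(t)^{\beta}, \quad c > 0,\ 0 < \beta < 1
  \;\Longrightarrow\;
  T \le \frac{V(0)^{\,1-\beta}}{c\,(1-\beta)} .
\]
% The settling time T grows with the initial state V(0) and shrinks as the
% gain c increases; c typically encodes the algorithm parameters and the
% connectivity of the network topology graph.
```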

Summary

Introduction

In [2], the authors propose a new event-driven zero-gradient-sum algorithm that can be applied to a wide range of network models; it achieves exponential convergence when the network topology is strongly connected and detail-balanced. In [13], the authors show that the algorithm achieves exponential convergence when each node's local cost function is strongly convex and its gradient satisfies a global Lipschitz continuity condition. Several effective methods have been studied to improve the speed of consensus convergence, for example by designing optimal topologies and optimal communication weights [17] [18] [19] [20] [21]. Although these consensus algorithms converge quickly, they cannot solve problem (1-1) in finite time.
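
For context, the zero-gradient-sum idea initializes each agent at the minimizer of its own local cost and then drives the agents toward consensus while the sum of local gradients stays at zero. The sketch below is a simple Euler discretisation of one common continuous-time form of this update; the quadratic costs, undirected ring topology, gain, and step size are assumptions for illustration, not the settings of the cited works.

```python
import numpy as np

n = 5
gamma, dt = 1.0, 0.01
targets = np.arange(n, dtype=float)        # f_i(x) = 0.5*(x - targets[i])^2, so each Hessian is 1

# Undirected ring: symmetric adjacency matrix
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

x = targets.copy()                         # zero-gradient-sum initialisation: x_i(0) = argmin f_i
for _ in range(2000):
    # Continuous-time rule x_i' = -gamma * H_i^{-1} * sum_j a_ij (x_i - x_j); here H_i = 1,
    # so dx = -gamma * L x with graph Laplacian L = D - A.
    dx = -gamma * (A.sum(axis=1) * x - A @ x)
    x = x + dt * dx

print(x)    # all entries approach the global minimiser mean(targets) = 2.0
```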

Summary
Major Outcomes
Organization of the Paper
Notation
Push-Pull Gradient Method
Detailed Push-Pull Gradient Method
Unify Different Distributed Computing Architecture Systems
Proof of Convergence
Finite-Time Convergence Algorithm
Algorithm Introduction
Convergence Analysis
Simulation
Push-Pull Fast Convergent Distributed Cooperative Learning Algorithm
Fast Convergent Distributed Algorithm
Fast Convergent Discrete-Time Distributed Cooperative Learning Algorithm
Two Types of Discrete Distributed Cooperative Learning Methods
Distributed Cooperative Learning Algorithm Based on Zero-Gradient Sum
Conclusions