Abstract

With the rapid development of science and technology and the continuous growth of network scale, traditional centralized control and optimization techniques struggle to handle large-scale, complex network problems, so distributed optimization algorithms, which are more robust and flexible, have attracted increasing attention. Given the unique advantages of multi-agent systems in distributed computing, many researchers have adopted them as the platform for both theoretical research on and applications of distributed optimization. For the distributed optimization problem over undirected networks, this article reviews the existing gradient-based distributed consensus optimization algorithms and, building on them, proposes a novel distributed momentum-accelerated consensus optimization algorithm. Under the assumption that each local objective function is strongly convex with a Lipschitz continuous gradient, the algorithm uses heterogeneous step sizes to drive every node to converge asymptotically to the exact global optimal solution, and an adjustable symmetric matrix is introduced to reduce or compensate for the effect of the original error on the pairwise error. The distributed resource allocation problem over undirected graphs is then studied, in which the agents' local cost functions are unknown; using sinusoidal excitation signals and only the inputs and outputs of the cost functions, first-order and second-order extremum seeking algorithms are designed, respectively. In addition, the algorithm introduces a dual acceleration mechanism that combines the Nesterov gradient method with the Heavy-Ball method, which substantially improves the convergence speed. Based on the interrelationship among the consensus error, the distance to the optimum, the state difference, and the tracking error, the ranges of the step size and the momentum parameters that guarantee linear convergence of the algorithm to the optimal solution are derived using linear matrix inequality techniques.
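To illustrate the style of update described above, the following Python sketch implements a generic gradient-tracking consensus scheme over an undirected ring graph with heterogeneous step sizes and combined Nesterov/Heavy-Ball momentum terms. The quadratic local objectives, the mixing matrix W, and the values of alpha, beta, and gamma are illustrative assumptions; this is not the paper's exact recursion and it omits, for example, the adjustable symmetric matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                               # number of agents, decision dimension

# Local strongly convex quadratics f_i(x) = 0.5 * (x - b_i)^T A_i (x - b_i)  (assumed)
A = [np.eye(d) + np.diag(rng.random(d)) for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]

def grad(i, x):
    """Gradient of the i-th local objective."""
    return A[i] @ (x - b[i])

# Symmetric, doubly stochastic mixing matrix for a ring graph (assumed topology)
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

alpha = 0.05 + 0.02 * rng.random(n)       # heterogeneous step sizes (illustrative values)
beta, gamma = 0.3, 0.3                    # Heavy-Ball and Nesterov momentum weights (illustrative)

x = np.zeros((n, d))                      # current iterates, one row per agent
x_prev = x.copy()                         # previous iterates, used by both momentum terms
y = np.array([grad(i, x[i]) for i in range(n)])   # gradient trackers

for _ in range(500):
    z = x + gamma * (x - x_prev)          # Nesterov-style extrapolation
    x_new = W @ z - alpha[:, None] * y + beta * (x - x_prev)   # mix, descend, add Heavy-Ball term
    # Gradient tracking: each y_i tracks the network-wide average gradient
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x_prev, x = x, x_new

# Centralized solution of sum_i f_i for comparison
x_star = np.linalg.solve(sum(A), sum(A[i] @ b[i] for i in range(n)))
print("max distance to optimum:", np.max(np.linalg.norm(x - x_star, axis=1)))
```

For the resource allocation part, the abstract describes perturbation-based extremum seeking that uses only input/output measurements of an unknown cost. The sketch below is a generic single-agent, first-order sinusoidal extremum seeking loop with a washout filter; the cost function, gains, and dither parameters are hypothetical and the distributed coordination layer of the paper is not reproduced.

```python
import numpy as np

def cost(u):
    """Treated as a black box: only evaluations are available to the agent."""
    return (u - 2.0) ** 2 + 1.0           # hypothetical local cost; minimizer at u = 2

a, omega = 0.2, 5.0                       # dither amplitude and frequency (assumed)
k, h, dt = 0.5, 1.0, 0.01                 # adaptation gain, washout cutoff, time step (assumed)

theta = 0.0                               # current estimate of the minimizer
y_dc = cost(theta)                        # low-pass estimate of the output's DC component

for step in range(20000):
    t = step * dt
    y = cost(theta + a * np.sin(omega * t))                 # probe the unknown cost
    y_dc += h * (y - y_dc) * dt                             # washout: remove the DC part
    grad_est = (2.0 / a) * (y - y_dc) * np.sin(omega * t)   # demodulation yields a gradient estimate
    theta -= k * grad_est * dt                              # gradient-descent-like adaptation

print("estimated minimizer:", theta)      # approaches 2 up to a small residual ripple
```

Both sketches are meant only to make the referenced techniques concrete; step sizes, momentum weights, and dither parameters must in practice be chosen within the ranges established by the paper's convergence analysis.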
