Abstract

Distributed optimization methods are powerful tools for dealing with complex systems. However, the slow convergence rates of some widely used distributed methods have restricted their applications. In this paper, a dynamic weighted-gradient descent method is proposed to improve the convergence rate significantly, and the construction of the dynamic weight matrix is the key to our distributed method. To form the matrix, the maximal differences between the gradients of neighboring agents are calculated to derive its entries, which accelerates convergence while the equality constraint remains satisfied at every iteration. Furthermore, momentum terms based on the last updates of the decision variables are introduced, which reduce frequent changes in the gradients and further speed up convergence. In the convergence analysis, two propositions and a theorem are proved, showing that the values of the objective functions decrease monotonically to the optima. Finally, simulations are carried out, and the results show that, for a given accuracy, the proposed method requires at most one fourth of the iterations needed by three widely used methods. Additionally, applying the proposed method, the optimal dispatches in a multi-microgrid (MMG) system are obtained quickly in a distributed manner, and the MMG continues to work well even if some agents fail on the communication network.
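
To make the kind of update described above concrete, the following is a minimal sketch (not the paper's exact algorithm) of a gradient-difference exchange with momentum for an equality-constrained dispatch problem. The quadratic costs, ring communication topology, fixed step size, and momentum factor are illustrative assumptions; the paper's dynamic weight matrix built from maximal gradient differences is not reproduced here.

```python
import numpy as np

# Hypothetical quadratic costs f_i(x) = a_i*x^2 + b_i*x (placeholder parameters, not from the paper)
a = np.array([0.04, 0.03, 0.035, 0.05])
b = np.array([2.0, 1.8, 2.2, 1.9])
demand = 300.0                        # equality constraint: sum(x) == demand

def grad(x):
    return 2 * a * x + b

# Assumed ring communication graph among four agents
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

x = np.full(4, demand / 4)            # feasible start: constraint holds
x_prev = x.copy()
alpha, beta = 0.5, 0.3                # hand-tuned step size and momentum factor

for k in range(200):
    g = grad(x)
    # Symmetric pairwise gradient-difference exchange: each pair's contributions cancel,
    # so sum(x) is unchanged and the equality constraint is preserved at every iteration.
    delta = np.zeros_like(x)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            delta[i] += alpha * (g[j] - g[i])
    # Momentum on the last update; the momentum terms also sum to zero, keeping feasibility.
    momentum = beta * (x - x_prev)
    x_prev = x.copy()
    x = x + delta + momentum

print("dispatch:", np.round(x, 2), "sum:", round(x.sum(), 2))
```

In this sketch the agents converge when all marginal costs (gradients) agree, which is the familiar equal-incremental-cost condition for dispatch problems; the momentum term simply reuses the previous update direction to damp oscillations in the exchanged gradients.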
