Abstract

This paper studies optimization problems over multi-agent systems in which all agents cooperatively minimize a global objective function expressed as a sum of local cost functions. Each agent uses only local computation and communication throughout the process, without leaking private information. Based on the Barzilai–Borwein (BB) method and multi-consensus inner loops, we propose a distributed algorithm, named ADBB, that admits larger step-sizes and achieves accelerated convergence. Moreover, because it employs only row-stochastic weight matrices, ADBB can solve optimization problems over unbalanced directed networks without requiring each agent to know its neighbors' out-degrees. By establishing contraction relationships among the consensus error, the optimality gap, and the gradient tracking error, we prove that ADBB converges linearly to the global optimal solution. Simulations on a real-world data set validate the theoretical analysis.
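
For context, the classical BB step-size on which the algorithm is based is computed from two consecutive iterates and gradients as alpha_k = (s^T s) / (s^T y), with s = x_k - x_{k-1} and y = grad f(x_k) - grad f(x_{k-1}). The Python sketch below illustrates this classical rule only; the function name, the zero-denominator safeguard, and the fallback value are illustrative assumptions, and the full ADBB update (consensus mixing with row-stochastic weights, gradient tracking, and multi-consensus inner loops) is given in the paper.

```python
import numpy as np

def bb_stepsize(x_curr, x_prev, grad_curr, grad_prev, fallback=1e-3):
    """Classical Barzilai-Borwein (BB1) step-size:
    alpha = (s^T s) / (s^T y), with s = x_k - x_{k-1}
    and y = grad f(x_k) - grad f(x_{k-1})."""
    s = x_curr - x_prev
    y = grad_curr - grad_prev
    denom = s @ y
    if abs(denom) < 1e-12:  # degenerate curvature estimate; use fallback
        return fallback
    return (s @ s) / denom

# Usage sketch on a toy quadratic f(x) = 0.5 * x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x_prev = np.zeros(2)
x_curr = np.array([0.1, 0.1])
for _ in range(50):
    alpha = bb_stepsize(x_curr, x_prev, grad(x_curr), grad(x_prev))
    x_prev, x_curr = x_curr, x_curr - alpha * grad(x_curr)
print(x_curr, np.linalg.solve(A, b))  # BB iterate vs. exact minimizer
```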
