Abstract

A novel computation-efficient quantized distributed optimization algorithm is presented in this article for solving a class of convex optimization problems over time-varying undirected networks with limited communication capacity. These problems concern the minimization of a sum of local convex objective functions using only local communication and local computation. In most existing distributed optimization algorithms, each agent must calculate the subgradient of its local convex objective function at every time step, which leads to a heavy computational burden. The proposed algorithm incorporates a random sleep scheme into the agents' update procedures in a probabilistic form to reduce the computation load, and further allows for uncoordinated step-sizes across agents. A quantization strategy is also applied to cope with the limited communication capacity. Theoretical analysis establishes that the convex optimization problems can be solved, and numerical analysis shows that the proposed algorithm significantly reduces the subgradient computation load. The boundedness of the quantization levels at each time step is explicitly characterized. Simulation examples demonstrate the effectiveness of the algorithm and the correctness of the theoretical results.
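
The abstract does not give the paper's exact update rule, quantizer, or step-size conditions, so the following Python sketch is only a minimal illustration of the ideas it names: a consensus step over quantized neighbor states, a random sleep scheme that skips the costly subgradient evaluation with some probability, and uncoordinated per-agent step-sizes. Every detail below (the function names, the uniform quantizer, the equal-weight mixing) is an assumption chosen for illustration, not the algorithm from the paper.

```python
import numpy as np

def quantize(x, level):
    """Uniform quantizer with resolution `level` (assumed scheme)."""
    return level * np.round(x / level)

def step(states, neighbors, subgrads, step_sizes, p_active, level, rng):
    """One synchronous round of a quantized distributed subgradient
    method with a random sleep scheme over an undirected network.

    states:      (n, d) array, one local estimate per agent
    neighbors:   dict agent -> list of neighbor indices (may vary in time)
    subgrads:    list of callables, subgrads[i](x) -> subgradient of f_i at x
    step_sizes:  (n,) array of uncoordinated per-agent step-sizes
    p_active:    probability an agent evaluates its subgradient this round
    level:       quantization resolution for transmitted states
    """
    n, d = states.shape
    new_states = states.copy()
    for i in range(n):
        nbrs = neighbors[i]
        # Agents exchange quantized states to respect the limited
        # communication capacity.
        q_nbrs = [quantize(states[j], level) for j in nbrs]
        # Equal-weight consensus step (one possible choice of
        # mixing weights on an undirected graph).
        mix = (quantize(states[i], level) + sum(q_nbrs)) / (1 + len(nbrs))
        # Random sleep: with probability 1 - p_active the agent skips
        # the subgradient evaluation in this round, saving computation.
        if rng.random() < p_active:
            mix = mix - step_sizes[i] * subgrads[i](states[i])
        new_states[i] = mix
    return new_states

# Example usage on a 3-agent ring minimizing sum_i (x - t_i)^2:
rng = np.random.default_rng(0)
targets = [0.0, 1.0, 2.0]
subgrads = [lambda x, t=t: 2.0 * (x - t) for t in targets]
states = rng.normal(size=(3, 1))
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(200):
    states = step(states, neighbors, subgrads,
                  step_sizes=np.array([0.05, 0.04, 0.06]),
                  p_active=0.5, level=0.01, rng=rng)
# All local estimates should cluster near the global minimizer x* = 1.0.
```

In this sketch, lowering `p_active` trades convergence speed for fewer subgradient evaluations, and the quantization resolution `level` bounds the per-round communication cost, mirroring the trade-offs the abstract describes.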
