Abstract

In this paper, we solve convex distributed optimization problems, including unconstrained optimization and a special constrained problem commonly known as the resource allocation problem, over a network of agents whose communication is represented by directed graphs (digraphs), using finite-time consensus-based and dual-based first-order gradient descent (GD) techniques. The key idea is that a special consensus matrix is used to reformulate the problem so that our dual-based algorithm applies to digraphs. Because distributed finite-time consensus is exact (not approximate), classical centralized optimization techniques (e.g., Nesterov accelerated GD) can be conveniently embedded in our dual-based algorithm, so the distributed algorithm inherits the performance of classical centralized algorithms that are proved to have optimal convergence rates. As a result, the proposed algorithm converges faster, in terms of the number of optimization iterations, than other distributed optimization algorithms in the literature. Since each consensus process consists of finitely many communication steps, the proposed algorithm is also faster in wall-clock time whenever the time needed to communicate values between two neighbors is below a threshold relative to the time needed for local computations, as demonstrated in the simulations.
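The abstract names Nesterov accelerated GD as the classical centralized technique that can be embedded once exact consensus supplies global quantities. The following is a minimal sketch of that classical technique only, not the paper's full dual-based algorithm; the quadratic objective, step size, and momentum parameter are illustrative assumptions.

```python
# Sketch of classical Nesterov accelerated gradient descent (the centralized
# method the abstract says can be embedded after exact finite-time consensus).
# Objective and hyperparameters below are illustrative, not from the paper.

def nesterov_gd(grad, x0, lr, momentum, iters):
    """Run Nesterov accelerated GD and return the final iterate."""
    x = x0
    v = 0.0  # velocity (accumulated momentum)
    for _ in range(iters):
        lookahead = x - momentum * v       # gradient is evaluated at the lookahead point
        v = momentum * v + lr * grad(lookahead)
        x = x - v
    return x

# Example: minimize f(x) = (x - 3)^2 with gradient 2*(x - 3); minimizer is x* = 3.
x_star = nesterov_gd(lambda x: 2.0 * (x - 3.0), x0=0.0, lr=0.1, momentum=0.9, iters=200)
```

In the distributed setting described above, the gradient evaluation would be replaced by a quantity assembled through the finite-time exact consensus step, which is why the centralized convergence guarantees carry over.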
