Abstract

Distributed optimization is of essential importance in networked systems. Most existing distributed algorithms either assume information exchange over undirected graphs, or require that the underlying directed network topology provide a doubly stochastic weight matrix to the agents. In this brief paper, a distributed subgradient-based algorithm is proposed to solve nonsmooth convex optimization problems. The algorithm applies to directed graphs without using a doubly stochastic weight matrix. Moreover, the algorithm is a distributed generalization and improvement of the quasi-monotone subgradient algorithm. An O(1/k) convergence rate is achieved. The effectiveness of our algorithm is also illustrated by a numerical example.
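The abstract gives no algorithmic details, but the general setting it describes can be illustrated with a generic subgradient-push (push-sum) sketch, a standard way to run distributed subgradient steps over a directed graph using only a column-stochastic (not doubly stochastic) mixing matrix. This is not the paper's algorithm; the graph, weights, and objective below are illustrative assumptions. Each node i minimizes its share of the nonsmooth objective f(x) = Σ_i |x − a_i|, whose minimizer is the median of the a_i:

```python
import numpy as np

# Column-stochastic mixing matrix for a strongly connected directed
# graph (edges 0->1, 0->2, 1->2, 2->0, plus self-loops). Its row sums
# differ from 1, so it is NOT doubly stochastic -- the setting the
# abstract targets.
A = np.array([[1/3, 0.0, 0.5],
              [1/3, 0.5, 0.0],
              [1/3, 0.5, 0.5]])

a = np.array([1.0, 2.0, 4.0])  # f_i(x) = |x - a_i|; the sum is minimized at median(a) = 2

x = np.zeros(3)                # push-sum numerators, one per node
y = np.ones(3)                 # push-sum weights correct the column-stochastic bias
z = x / y                      # de-biased local estimates

for t in range(1, 5001):
    alpha = 1.0 / np.sqrt(t)   # diminishing step size
    g = np.sign(z - a)         # subgradient of |z_i - a_i| at each node
    x = A @ (x - alpha * g)    # local subgradient step, then mix along directed edges
    y = A @ y                  # mix the weights the same way
    z = x / y                  # recover unbiased estimates of the average

print(z)                       # all entries settle near the global minimizer 2
```

The ratio z = x/y is what makes a merely column-stochastic matrix sufficient: mixing skews both numerator and weight by the same factor, so the quotient still tracks the network average.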
