Abstract

This article investigates a convex optimization problem with general local constraints, including $N$ equality constraints, $N$ inequality constraints, and $N$ closed convex set constraints. In particular, the objective function of the considered problem is the sum of $N$ convex functions, which are viewed as local objective functions. First, for the distributed setting, a weight-unbalanced digraph with $N$ nodes is introduced to describe the communication topology. In this setting, each node is responsible for one local objective function together with an equality constraint, an inequality constraint, and a closed convex set constraint, which are viewed as its local constraints. The aim of this article is to solve the considered convex optimization problem in a distributed manner, i.e., to drive the states of all nodes to a common optimal solution of the considered problem under the condition that each node knows only its own local objective function, its own local constraints, and the states of its neighbors. To this end, by resorting to the exact penalty function method and a gradient-descent-like method, two distinct distributed discrete-time algorithms are developed for two different cases. Furthermore, the convergence of the designed algorithms is rigorously analyzed under common and standard assumptions, and the convergence rates are characterized in detail. Finally, simulation results are provided to verify the theoretical results.
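As a rough illustration of the ingredients named above (an exact penalty for the local equality and inequality constraints, consensus-style mixing over the digraph, a gradient step, and projection onto the local convex set), the following Python sketch shows one plausible form of a single node's update. The penalty form, the uniform mixing weights, the diminishing step size, and all function names here are assumptions made for illustration, not the paper's actual algorithms.

```python
import numpy as np

def penalized_grad(x, grad_f, h, grad_h, g, grad_g, rho):
    """(Sub)gradient of the exact-penalty surrogate
       f_i(x) + rho * (|h_i(x)| + max(0, g_i(x)))
    for the local constraints h_i(x) = 0 and g_i(x) <= 0."""
    d = grad_f(x) + rho * np.sign(h(x)) * grad_h(x)   # equality penalty term
    if g(x) > 0:                                      # inequality penalty term
        d = d + rho * grad_g(x)
    return d

def node_update(neighbor_states, weights, step, proj, *pen):
    """One node's iteration: mix in-neighbors' states with (assumed)
    stochastic weights, descend along the penalized local gradient,
    then project onto the local closed convex set Omega_i."""
    v = sum(w * x for w, x in zip(weights, neighbor_states))
    return proj(v - step * penalized_grad(v, *pen))

# Toy run (all concrete choices hypothetical): three nodes minimize
# sum_i (x - a_i)^2 subject to x <= 0.5 (penalized) and x in [0, 2]
# (handled by projection), with trivial equality constraints h_i = 0.
a = [0.0, 1.0, 2.0]
xs = np.zeros(3)
proj = lambda v: np.clip(v, 0.0, 2.0)
zero = lambda x: 0.0
for k in range(1, 201):
    step = 1.0 / k  # diminishing step size (assumed)
    xs = np.array([
        node_update(xs, [1 / 3] * 3, step, proj,
                    lambda x, ai=a[i]: 2 * (x - ai),  # grad f_i
                    zero, zero,                       # h_i, grad h_i
                    lambda x: x - 0.5,                # g_i
                    lambda x: 1.0,                    # grad g_i
                    4.0)                              # penalty weight rho
        for i in range(3)
    ])
print(xs)  # all states settle near the constrained optimum x* = 0.5
```

One caveat the sketch ignores: on a weight-unbalanced digraph, plain stochastic mixing of this kind generally drives the nodes toward a weighted rather than a uniform average, which is presumably part of why the paper develops and analyzes dedicated algorithms for the unbalanced case.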
