Abstract

In this work, we study a generic network cost minimization problem in which every node holds a local decision vector. Each node incurs a cost depending on its own decision vector, and each link incurs a cost depending on the decision vectors of its two end nodes. All nodes cooperate to minimize the overall network cost. To solve this problem in a decentralized manner, we resort to the distributed alternating direction method of multipliers (DADMM). However, each DADMM iteration requires every node to solve a local optimization problem, which imposes an intractable computational burden in many circumstances. We therefore propose a distributed linearized ADMM (DLADMM) algorithm whose iterations involve only closed-form computations and avoid local optimization problems. This greatly reduces the computational complexity and makes DLADMM amenable to devices with low computational capability, such as widely deployed sensors, for which the computationally intensive traditional DADMM is unsuitable. We prove that DLADMM converges to an optimal point when the local cost functions are convex with Lipschitz continuous gradients, and we establish a linear convergence rate when the local cost functions are further strongly convex. Numerical experiments corroborate the effectiveness of DLADMM: it attains convergence performance similar to that of DADMM while incurring much lower computational overhead.
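To make the contrast concrete, below is a minimal sketch of one node's primal update under each scheme. It assumes a simple quadratic coupling term and a gradient-based inner solver; the names (grad_f, neighbors_term, rho, eta) are illustrative placeholders and do not reproduce the paper's exact update rules.

```python
import numpy as np

def dadmm_update(x, grad_f, neighbors_term, rho, eta, n_inner=100):
    """DADMM-style update (sketch): approximately solve the local subproblem
    min_z f(z) + (rho/2)||z - neighbors_term||^2
    with an inner gradient loop -- costly on weak devices."""
    z = x.copy()
    for _ in range(n_inner):
        z -= eta * (grad_f(z) + rho * (z - neighbors_term))
    return z

def dladmm_update(x, grad_f, neighbors_term, rho, eta):
    """DLADMM-style update (sketch): linearize f at the current iterate,
    so the subproblem admits a closed-form, single-step solution."""
    return x - eta * (grad_f(x) + rho * (x - neighbors_term))

if __name__ == "__main__":
    # Toy example: f(x) = ||x - 1||^2 / 2, so grad_f(x) = x - 1.
    grad_f = lambda x: x - np.ones(3)
    x0 = np.zeros(3)
    target = np.full(3, 0.5)  # stand-in for the neighbor consensus term
    print(dadmm_update(x0, grad_f, target, rho=1.0, eta=0.1))
    print(dladmm_update(x0, grad_f, target, rho=1.0, eta=0.1))
```

The point of the contrast is that the DLADMM step costs a single gradient evaluation per iteration, whereas the DADMM step nests an entire inner solve inside every outer iteration.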
