Abstract

This paper proposes a penalty alternating direction method of multipliers (ADMM) to minimize the sum of convex composite functions over a decentralized network. Each agent in the network holds a private function consisting of a smooth part and a nonsmooth part, and can exchange information only with its neighbors during the optimization process. We consider a penalized approximation of the decentralized optimization problem but, unlike existing penalty methods, allow the penalty parameter to be very small, so that the approximation error is negligible. On the other hand, a small penalty parameter makes the penalized objective ill-conditioned, forcing the popular proximal gradient descent method to use a small step size and hence converge slowly. To address this issue, we propose to solve the penalized formulation with ADMM. We further exploit the composite structure of the private functions by linearizing the smooth parts, which reduces the computational cost, and handling the nonsmooth parts with proximal operators. The proposed penalty ADMM (abbreviated as PAD) is provably convergent when the private functions are convex, and linearly convergent when the smooth parts are additionally strongly convex. Numerical experiments corroborate the theoretical analyses and demonstrate the advantages of PAD over existing state-of-the-art algorithms such as DL-ADMM, PG-EXTRA, and NIDS.
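To make the mechanism concrete, the following is a minimal NumPy sketch of a linearized ADMM applied to a penalized decentralized lasso problem on a ring network. It illustrates the three ingredients named above: a small quadratic consensus penalty (1/(2ρ)) Σ‖x_i − x_j‖² over the edges, linearization of the smooth parts so the primal update has a closed form, and a proximal (soft-thresholding) step for the ℓ1 nonsmooth parts. The problem data, parameter values, and helper names (grad_smooth, soft_threshold) are illustrative assumptions, and these are generic linearized-ADMM updates on the penalized objective, not the paper's exact PAD iterations.

```python
# Hedged sketch: linearized ADMM on a penalty formulation of decentralized
# lasso (assumed setup; not the authors' exact PAD updates).
import numpy as np

rng = np.random.default_rng(0)
n_agents, d, m = 5, 10, 20                # agents, dimension, samples per agent
A = [rng.standard_normal((m, d)) for _ in range(n_agents)]
x_true = np.zeros(d); x_true[:3] = rng.standard_normal(3)
b = [Ai @ x_true + 0.01 * rng.standard_normal(m) for Ai in A]
lam = 0.1                                  # l1 weight (nonsmooth part g_i)

# Ring graph; penalty (1/(2*rho)) * sum over edges of ||x_i - x_j||^2.
nbrs = [[(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)]
rho, beta = 0.01, 1.0                      # small penalty and ADMM parameters
# Step size from the Lipschitz constant of the smooth part; the 1/rho term
# is exactly the ill-conditioning that motivates ADMM over proximal gradient.
L_f = max(np.linalg.norm(Ai.T @ Ai, 2) for Ai in A) + 4.0 / rho
eta = 1.0 / L_f

def grad_smooth(X):
    """Gradient of f_i(x_i) = 0.5||A_i x_i - b_i||^2 plus the consensus
    penalty; each agent only needs its neighbors' current copies."""
    G = np.empty_like(X)
    for i in range(n_agents):
        lap = sum(X[i] - X[j] for j in nbrs[i])   # local Laplacian term
        G[i] = A[i].T @ (A[i] @ X[i] - b[i]) + lap / rho
    return G

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (handles the nonsmooth part)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

X = np.zeros((n_agents, d))                # local primal copies x_i
Z = np.zeros_like(X)                       # splitting variable for the l1 part
U = np.zeros_like(X)                       # scaled dual variable
for _ in range(2000):
    # x-update: smooth part linearized at the current iterate -> closed form
    X = (X / eta + beta * (Z - U) - grad_smooth(X)) / (1.0 / eta + beta)
    # z-update: proximal step on the nonsmooth part
    Z = soft_threshold(X + U, lam / beta)
    # dual ascent on the x = z constraint
    U += X - Z

print("consensus error:", np.max(np.abs(X - X.mean(axis=0))))
print("recovered support:", np.nonzero(np.abs(X.mean(axis=0)) > 1e-3)[0])
```

Because the smooth part is linearized, the x-update is a single gradient evaluation plus elementwise arithmetic rather than an inner minimization, and the printed consensus error stays small but nonzero, reflecting that a small penalty parameter makes the approximation error of the penalized formulation negligible rather than exactly zero.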
