Abstract

In this article, we examine a novel generic network cost minimization problem in which every node has a local decision vector to optimize. Each node incurs a cost associated with its decision vector, while each link incurs a cost related to the decision vectors of its two end nodes. All nodes collaborate to minimize the overall network cost. The formulated problem has broad applications in distributed signal processing and control, where the notion of link costs often arises. To solve this problem in a decentralized manner, we develop a distributed variant of Newton's method, which converges faster than first-order alternatives such as gradient descent and the alternating direction method of multipliers (ADMM). The proposed method is based on an appropriate splitting of the Hessian matrix and an approximation of its inverse, which is used to determine the Newton step. Global linear convergence of the proposed algorithm is established under several standard technical assumptions on the local cost functions. Furthermore, analogous to the classical centralized Newton's method, a quadratic convergence phase of the algorithm over a certain time interval is identified. Finally, numerical simulations validate the effectiveness of the proposed algorithm and its superiority over first-order methods, especially when the cost functions are ill-conditioned. The communication and computational complexity of the proposed distributed Newton's method and of alternative first-order methods are also discussed.
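To make the problem structure concrete, the sketch below instantiates a tiny network cost minimization problem of the kind the abstract describes. The specific node and link cost functions (quadratic targets and quadratic coupling), the ring topology, and all parameter values are illustrative assumptions, not the paper's formulation; a decentralized gradient descent baseline is used since Newton-type details appear later.

```python
import numpy as np

# Hypothetical instance of the network cost problem: node i holds a local
# decision vector x_i; node costs f_i(x_i) = 0.5*||x_i - a_i||^2 and link
# costs g_ij(x_i, x_j) = 0.5*rho*||x_i - x_j||^2 are illustrative choices.
rng = np.random.default_rng(0)
n, d = 4, 2                        # number of nodes, local vector dimension
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a ring network
a = rng.standard_normal((n, d))    # per-node targets for the node costs
rho = 1.0                          # link cost weight

def network_cost(x):
    node = 0.5 * np.sum((x - a) ** 2)
    link = 0.5 * rho * sum(np.sum((x[i] - x[j]) ** 2) for i, j in edges)
    return node + link

def grad(x):
    # Block i of the gradient depends only on x_i and its neighbors'
    # vectors, which is what makes decentralized updates possible.
    g = x - a
    for i, j in edges:
        g[i] += rho * (x[i] - x[j])
        g[j] += rho * (x[j] - x[i])
    return g

# First-order baseline: each node descends along its own gradient block.
x = np.zeros((n, d))
for _ in range(200):
    x = x - 0.1 * grad(x)
```

With these quadratic costs the problem is strongly convex, so the gradient iteration converges linearly; the paper's point is that such first-order schemes slow down badly when the costs are ill-conditioned, which motivates the Newton-type method.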

Highlights

  • The advancement of decentralized signal processing and control in multiagent systems relies on the development of various distributed optimization methods

  • Inspired by the recent work on the network Newton algorithm for decentralized consensus optimization [24], we develop a distributed variant of Newton’s method for the generic network cost minimization problem in this article

  • We show that Algorithm 1 possesses a quadratic convergence phase, a generic theoretical advantage of second-order optimization methods over first-order ones [29], [33]


Summary

INTRODUCTION

The advancement of decentralized signal processing and control in multiagent systems relies on the development of various distributed optimization methods. Inspired by the recent work on the network Newton algorithm for decentralized consensus optimization [24], we develop a distributed variant of Newton’s method for the generic network cost minimization problem in this article. We note that matrix-splitting-based Newton-type methods have been proposed for different problems in prior works, for example, [10] for network utility maximization (NUM), [30] for network flow optimization, and [24] for consensus optimization. None of these existing works considers the joint optimization of generic node/link cost functions, which is of interest in this article. For the analysis, define ∇²φ(x, y) ∈ ℝ^((a+b)×(a+b)) to be the complete Hessian matrix with respect to the joint vector [xᵀ, yᵀ]ᵀ.
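The matrix-splitting idea behind such Newton-type methods can be sketched as follows. Split the Hessian as H = D − B, where D is (block) diagonal and computable locally at each node, and B couples only neighboring nodes; when the spectral radius of D⁻¹B is below one, H⁻¹ = Σₖ (D⁻¹B)ᵏ D⁻¹, and truncating this Neumann series gives an approximate Newton direction computable with a finite number of neighbor exchanges. The code below is an illustrative numerical check under simplifying assumptions (a small dense, diagonally dominant matrix stands in for the network Hessian); it is not the paper's exact iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6

# Build a symmetric positive definite stand-in "Hessian" H that is strictly
# diagonally dominant, so the splitting below yields a convergent series.
A = rng.standard_normal((m, m))
A = (A + A.T) / 2.0
np.fill_diagonal(A, 0.0)
H = A + np.diag(np.abs(A).sum(axis=1) + 1.0)

# Split H = D - B: D is the (locally available) diagonal part,
# B the off-diagonal coupling between neighboring nodes.
D = np.diag(np.diag(H))
B = D - H
g = rng.standard_normal(m)  # current gradient

def approx_newton_dir(K):
    """Truncated-Neumann approximation of the Newton direction -H^{-1} g.

    Uses the recursion d <- D^{-1} (B d - g), equivalent to keeping K+1
    terms of the series; in a network, each step would cost one round of
    communication because B only couples adjacent nodes.
    """
    Dinv = np.diag(1.0 / np.diag(D))
    d = -Dinv @ g
    for _ in range(K):
        d = Dinv @ (B @ d - g)
    return d

exact = -np.linalg.solve(H, g)
errs = [np.linalg.norm(approx_newton_dir(K) - exact, np.inf)
        for K in (0, 2, 8)]
# The approximation error contracts geometrically in K, so extra
# communication rounds buy a more accurate Newton step.
```

The trade-off this exposes — more series terms mean a better Newton step but more communication per iteration — is exactly the kind of issue the communication/computational complexity discussion addresses.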

PROBLEM FORMULATION AND ALGORITHM DEVELOPMENT
Problem Formulation
Algorithm Development
CONVERGENCE ANALYSIS
The Global Linear Convergence
The Quadratic Convergence Phase
NUMERICAL TESTS
COMMUNICATION AND COMPUTATIONAL COMPLEXITY
Communication Complexity
Computational Complexity
CONCLUSION

