Abstract

This paper considers the distributed optimization problem whose global objective function is strongly convex, regardless of whether each local objective function is convex or not. We develop a distributed continuous-time version of the modified Newton method by using the blended dynamics of a heterogeneous multi-agent system. We modify the Newton method into a second-order differential equation based on singular perturbation theory and decompose the modified equation into the dynamics of the individual agents. Each agent knows only its own local objective function and exchanges with its neighbors an output that is a linear combination of its state and the state's time derivative. This method estimates the Newton direction without explicitly computing the inverse of the Hessian. The estimated Newton direction contains information about the global objective function. If the average Hessian is positive definite, the method can find the global minimum even if some of the local Hessians are negative definite. Under the proposed algorithm, all agents converge to a neighborhood of the global minimum of the convex global objective function when the coupling gain is sufficiently large.
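
To illustrate why the global Newton direction can remain well defined even when some local Hessians are negative definite, the following minimal Python sketch integrates the centralized continuous-time Newton flow on the sum of local objectives, which is the flow the blended dynamics is designed to emulate. It is not the distributed algorithm of the paper: the quadratic local objectives (matrices `A`, vectors `b`), the step size, and the iteration count are all hypothetical choices for illustration only.

```python
import numpy as np

# Hypothetical quadratic local objectives f_i(x) = 0.5 x^T A_i x + b_i^T x.
# The third local Hessian is negative definite, but the sum of the Hessians
# is positive definite, matching the setting described in the abstract.
A = [np.array([[ 4.0, 0.0], [0.0,  3.0]]),
     np.array([[ 5.0, 1.0], [1.0,  4.0]]),
     np.array([[-2.0, 0.0], [0.0, -1.0]])]
b = [np.array([ 1.0, -2.0]),
     np.array([ 0.5,  0.0]),
     np.array([-1.0,  1.0])]

def grad_sum(x):
    """Gradient of the global objective sum_i f_i(x)."""
    return sum(Ai @ x + bi for Ai, bi in zip(A, b))

H_sum = sum(A)  # global Hessian (constant for quadratic objectives)
assert np.all(np.linalg.eigvalsh(H_sum) > 0), "sum of Hessians must be positive definite"

# Forward-Euler integration of the Newton flow
#   x_dot = -(sum_i H_i)^{-1} sum_i grad f_i(x).
# A linear solve is used instead of forming the Hessian inverse explicitly.
x = np.array([5.0, -5.0])
dt = 0.05
for _ in range(400):
    x = x - dt * np.linalg.solve(H_sum, grad_sum(x))

x_star = np.linalg.solve(H_sum, -sum(b))  # exact minimizer of the quadratic sum
print("final state:", x, "  true minimizer:", x_star)
```

In this toy setting the flow converges to the global minimizer despite one agent's objective being concave, because only the sum of the Hessians needs to be positive definite.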
