Abstract

This paper considers a second-order multi-agent system for solving a non-smooth convex optimization problem, where the global objective function is a sum of local convex objective functions with different bound constraints over undirected graphs. A novel distributed continuous-time optimization algorithm is designed, in which each agent only has access to its own objective function and bound constraint. All agents cooperatively minimize the global objective function under some mild conditions. By virtue of the KKT conditions and the Lagrange multiplier method, convergence of the resulting dynamical system is ensured by invoking Lyapunov stability theory and the hybrid LaSalle invariance principle for differential inclusions. A numerical example is conducted to verify the theoretical results.

Highlights

  • The distributed optimization of a sum of local convex functions has been widely investigated in a variety of scenarios in recent years

  • Numerous distributed optimization algorithms are designed in a discrete-time fashion to search for the optimal solutions of the optimization problem in [3, 5, 8], while continuous-time strategies, owing to their relatively complete theoretical framework, have been widely applied to distributed optimization problems in [10,11,12]

  • Notably, Bianchi and Jakubowicz [26] presented a distributed constrained non-convex optimization algorithm which consists of two steps: a local stochastic gradient descent at each agent and a gossip step that drives the network of agents to consensus
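The two-step scheme described in the last highlight can be caricatured in a few lines. The quadratic local losses, ring topology, noise level, and step-size schedule below are illustrative assumptions, not details taken from [26]:

```python
import numpy as np

# Caricature of a two-step scheme: (1) each agent takes a local
# stochastic gradient step, (2) one randomly chosen pair of
# neighbours gossip-averages its estimates. Local losses are
# assumed quadratic, f_i(x) = (x - t_i)^2, on a ring of 5 agents.

rng = np.random.default_rng(1)
n = 5
targets = np.linspace(0.0, 4.0, n)           # minimizers of the f_i
edges = [(i, (i + 1) % n) for i in range(n)]  # undirected ring graph
x = rng.normal(size=n)                        # agents' local estimates

for k in range(1, 20001):
    step = 0.5 / k                            # diminishing step size
    noise = rng.normal(scale=0.1, size=n)     # stochastic gradient noise
    x -= step * (2.0 * (x - targets) + noise)  # local SGD step
    i, j = edges[rng.integers(len(edges))]     # gossip step: average
    x[i] = x[j] = 0.5 * (x[i] + x[j])          # one random edge

# all agents approach the minimizer of sum_i f_i, i.e. mean(targets)
print(np.round(x, 2))
```

The gossip step preserves the pairwise mean, so repeated averaging drives the network toward consensus while the diminishing gradient steps steer that consensus toward the global minimizer.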

Introduction

The distributed optimization of a sum of local convex functions has been widely investigated in a variety of scenarios in recent years. Inspired by the works of [7, 10, 11, 19], a novel distributed second-order continuous-time multi-agent system is proposed to solve a distributed convex optimization problem, where the objective function is a sum of local objective functions and each agent only knows its own local information. The proposed algorithm solves the convex optimization problem with a sum of convex objective functions subject to local bound constraints. It does not require the objective function to be smooth, which is required in most existing recurrent neural network algorithms.
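The paper's continuous-time dynamics are not reproduced here; as a rough intuition for the problem class, the following is a minimal discrete-time sketch of a related distributed projected subgradient scheme. The non-smooth local objectives f_i(x) = |x − a_i|, the ring mixing matrix, and the shared box constraint are all chosen purely for illustration:

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): distributed
# projected subgradient method for min sum_i |x - a_i| subject to
# a box constraint, with 4 agents on an undirected ring. Each agent
# only uses its own subgradient plus its neighbours' estimates.

n = 4
a = np.array([1.0, 2.0, 3.0, 4.0])   # local minimizers of f_i
lo, hi = 0.0, 2.5                    # common box constraint [lo, hi]

# doubly stochastic mixing matrix for the 4-agent ring
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros(n)                      # agents' local estimates
for k in range(1, 5001):
    g = np.sign(x - a)               # subgradient of |x_i - a_i|
    x = W @ x - (1.0 / k) * g        # consensus mixing + subgradient step
    x = np.clip(x, lo, hi)           # project onto the box constraint

# agents reach consensus near the constrained minimizer of sum_i |x - a_i|
print(np.round(x, 2))
```

Because the objectives are non-smooth, the update uses a subgradient (here, the sign function) rather than a gradient, and the diminishing step size 1/k together with the doubly stochastic mixing matrix drives the agents to a common constrained optimum.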

Mathematical Preliminaries
Non-smooth Analysis
Problem Formulation and Optimization Algorithm
Simulation
Conclusions