This paper addresses the distributed adaptive optimization problem over second-order multi-agent networks (MANs) with nonuniform gradient gains. The team objective function is a general convex function formed as the sum of local differentiable convex functions. First, a novel distributed adaptive optimization algorithm with nonuniform gradient gains is designed using only the local information available in each agent's neighborhood, where each gain depends solely on the agent's own state. The original closed-loop system is then transformed into an equivalent one by a coordinate transformation. Moreover, by constructing a Lyapunov function, it is proved that the states of all agents, including positions and velocities, remain bounded for any given initial values. Using Lyapunov stability theory, it is further shown that all agents asymptotically reach agreement and their positions converge to the optimal solution of the team objective function. Finally, the effectiveness of the theoretical results is demonstrated by several simulation examples.
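To make the setting concrete, the sketch below simulates a generic version of the problem: double-integrator agents on an undirected graph cooperating to minimize a sum of local quadratic objectives. It uses a standard consensus-plus-gradient protocol with an auxiliary integral state and a uniform gradient gain, so it does not reproduce the paper's nonuniform, state-dependent gains; the graph, the objectives f_i, and the gains alpha and beta are hypothetical choices made purely for illustration.

```python
# Minimal illustrative sketch of the problem setting only -- NOT the algorithm
# proposed in the paper. Double-integrator agents run a standard
# consensus-plus-gradient protocol with an auxiliary integral state.
import numpy as np

n = 4
A = np.array([[0., 1., 0., 1.],          # adjacency matrix of an undirected ring
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
c = np.array([1.0, 2.0, 3.0, 4.0])       # f_i(x) = (x - c_i)^2, team optimum = mean(c)
grad = lambda x: 2.0 * (x - c)           # stacked local gradients

alpha, beta, dt = 1.0, 1.0, 1e-3         # hypothetical gains and step size
x = np.array([5.0, -3.0, 0.5, 2.5])      # initial positions
v = np.zeros(n)                          # initial velocities
w = np.zeros(n)                          # integral (disagreement-correcting) state

for _ in range(50_000):                  # forward-Euler integration of the dynamics
    u = -alpha * (L @ x) - beta * v - grad(x) - w
    x, v, w = x + dt * v, v + dt * u, w + dt * alpha * beta * (L @ x)

print(x)   # positions approach mean(c) = 2.5: consensus at the team optimum
print(v)   # velocities approach zero
```

For the quadratic objectives chosen here, this protocol drives all positions to the minimizer of the team objective and all velocities to zero; the paper's contribution lies in achieving such behavior with gradient gains that vary across agents and depend on each agent's own state.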