Abstract

This paper studies an event-triggered consensus-based distributed algorithm for solving a class of convex optimization problems over a set of agents whose communication topology is described by an undirected and connected network. Each agent privately holds a local objective function, and the agents aim to cooperatively minimize the sum of these private objective functions. Communication among the agents is driven by a trigger condition: each agent interacts with its neighbors only at its own independently determined event-triggered sampling instants. The algorithm employs a doubly stochastic mixing matrix, which steers all agents exactly to a global, consensual optimal solution even under uncoordinated step-sizes. Under two fairly standard assumptions, strong convexity and smoothness of the objective functions, we show that the proposed algorithm converges to the consensual optimal solution at a geometric rate, provided the uncoordinated step-sizes do not exceed a certain upper bound, and that the convergence rate can be explicitly characterized. Zeno-like behavior is rigorously excluded; that is, the difference between any two successive sampling instants of each agent is at least two, reducing the communication cost by at least one half compared with traditional methods. Finally, the efficacy of the distributed algorithm and the feasibility of our theoretical analysis are demonstrated through a numerical experiment.
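
The paper's exact update rule is given in the full text. As a rough illustration of the ingredients the abstract names (event-triggered broadcasts, a doubly stochastic mixing matrix, and uncoordinated step-sizes), the following is a minimal sketch of a generic event-triggered, consensus-based distributed gradient iteration in Python. The quadratic local objectives, ring topology, step-size values, and trigger threshold are all illustrative assumptions; this plain consensus-plus-gradient update is not the authors' algorithm, which attains exact geometric convergence.

```python
import numpy as np

n, d = 4, 2                        # number of agents, decision dimension
rng = np.random.default_rng(0)

# Assumed strongly convex, smooth local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2
A = [rng.standard_normal((5, d)) for _ in range(n)]
b = [rng.standard_normal(5) for _ in range(n)]

def grad(i, x):
    # Gradient of the assumed local objective of agent i.
    return A[i].T @ (A[i] @ x - b[i])

# Symmetric, doubly stochastic mixing matrix for a ring graph (Metropolis weights).
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

alpha = np.array([0.02, 0.025, 0.03, 0.015])   # uncoordinated (agent-specific) step-sizes
x = rng.standard_normal((n, d))                # local iterates x_i
x_hat = x.copy()                               # last broadcast state of each agent
threshold = 1e-3                               # assumed event-trigger threshold

for k in range(3000):
    # Event trigger: agent i re-broadcasts only when its state has drifted
    # far enough from its last broadcast value; otherwise its neighbors keep
    # using the stale value x_hat[i], which is what saves communication.
    for i in range(n):
        if np.linalg.norm(x[i] - x_hat[i]) > threshold:
            x_hat[i] = x[i]
    # Consensus step on the last broadcast states, followed by a local
    # gradient step with each agent's own step-size.
    g = np.array([grad(i, x[i]) for i in range(n)])
    x = W @ x_hat - alpha[:, None] * g

print("consensus disagreement:", np.linalg.norm(x - x.mean(axis=0)))
```

In this sketch each row of `x` is one agent's iterate, and the broadcast states `x_hat` change only at the triggered instants, so the number of transmissions is typically far smaller than the number of iterations.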
