Abstract

This paper addresses the problem of cooperative control of a team of distributed agents with nonlinear discrete-time dynamics. Each agent is assumed to evolve in discrete time according to locally computed control laws and to exchange delayed state information with a subset of neighboring cooperating agents. The cooperative control problem is formulated in a receding-horizon (RH) framework, where the control laws depend on the local state variables (feedback action) and on delayed information gathered from cooperating neighbors (feedforward action). A rigorous stability analysis is carried out exploiting, on the one hand, the stabilizing properties of the RH local control laws and, on the other, input-to-state stability (ISS) arguments. In particular, it is shown that, under suitable assumptions, each controlled agent is ISS under the action of its local control law. The stability of the team of agents is then proved by means of small-gain theorem results.
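To illustrate the control architecture described above, the following is a minimal sketch (not the paper's algorithm) of two coupled agents with nonlinear discrete-time dynamics, each applying a locally computed control law that combines local state feedback with a feedforward term built from one-step-delayed neighbor information. The dynamics `f`, the gains `k_fb` and `k_ff`, and the coupling strength are hypothetical choices made only for this example; they are picked so that a small-gain-style contraction holds and the interconnected states decay.

```python
import numpy as np

def f(x, u, x_nbr):
    # Hypothetical nonlinear local dynamics with weak coupling to the
    # neighbor's current state (the interconnection term).
    return 0.5 * np.sin(x) + u + 0.1 * x_nbr

def local_control(x, x_nbr_delayed, k_fb=0.4, k_ff=0.1):
    # Feedback on the local state plus feedforward on the delayed
    # information received from the cooperating neighbor.
    return -k_fb * x - k_ff * x_nbr_delayed

x = np.array([2.0, -1.5])   # initial states of agents 0 and 1
delayed = x.copy()          # one-step-delayed exchanged state information
for t in range(30):
    u = np.array([local_control(x[0], delayed[1]),
                  local_control(x[1], delayed[0])])
    x_next = np.array([f(x[0], u[0], x[1]),
                       f(x[1], u[1], x[0])])
    delayed = x.copy()      # neighbors receive states one step late
    x = x_next

print(np.abs(x))  # both state magnitudes have contracted toward the origin
```

With these gains each closed-loop agent is ISS with respect to its neighbor's (delayed) state, and the coupling is weak enough that the interconnection satisfies a small-gain condition, so the team-level states converge despite the communication delay.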
