Abstract

This paper presents a distributed optimal model reference adaptive control (MRAC) approach to the containment control problem for heterogeneous multi-agent systems (MASs) with non-autonomous leaders. First, a fully distributed adaptive observer is designed to provide each agent with its desired reference trajectory by estimating the convex hull spanned by the leaders. The distributed observer dynamics serve as a reference model with which each follower synchronizes. Neither global communication graph information nor knowledge of the leader dynamics is required to design the observer. In contrast to existing MRAC designs for single-agent systems and existing containment control solutions for MASs, the proposed approach imposes optimality and yields a distributed adaptive optimal solution to the containment control problem. To impose optimality, a performance function is defined in terms of the adaptive observers' states and the followers' local measurements. It is shown that accounting for non-autonomous leaders in this optimal control problem leads to inhomogeneous algebraic Riccati equations (AREs), rather than the standard AREs of conventional optimal control problems. To obviate the need for knowledge of the agents' dynamics, an off-policy reinforcement learning algorithm, implemented on an actor-critic structure, is employed to solve the inhomogeneous ARE. A simulation example illustrates the effectiveness of the proposed method.
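
The contrast between the standard and inhomogeneous AREs mentioned above can be sketched generically as follows. Note that the symbols A, B, Q, R, and the forcing term Delta below are placeholders under the usual linear-quadratic conventions, not the paper's own notation: an autonomous reference yields the standard ARE, while nonzero leader inputs introduce an additive forcing term that makes the equation inhomogeneous.

```latex
% Generic illustration (placeholder symbols, not the paper's notation):
% standard ARE for an autonomous reference vs. an inhomogeneous ARE,
% where the forcing term \Delta is induced by the leaders' nonzero inputs.
\[
  0 = A^{\top} P + P A + Q - P B R^{-1} B^{\top} P
  \qquad \text{vs.} \qquad
  0 = A^{\top} P + P A + Q - P B R^{-1} B^{\top} P + \Delta .
\]
```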
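As a rough illustration of the off-policy idea only, the following sketch applies off-policy integral policy iteration (in the style of Jiang and Jiang's data-driven LQR method) to the standard, homogeneous ARE on a single linear system. It is not the paper's algorithm: the distributed observer, the actor-critic implementation, and the inhomogeneous forcing term are omitted, and the system matrices and parameters below are illustrative assumptions used only to generate data. The learning update itself never reads A or B.

```python
# Minimal sketch: off-policy integral RL that approximates the solution of the
# standard ARE  A'P + PA + Q - P B R^{-1} B' P = 0  from trajectory data.
# Single-agent, homogeneous-ARE illustration only; A, B, Q, R are assumed
# values used purely by the simulator, never by the learning update.
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed plant (open-loop stable, so the zero initial gain is stabilizing).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
n, m = B.shape

# ---- Phase 1: collect data once under an exploratory behavior policy ----
dt, steps_per_interval, N = 1e-3, 50, 80
x, t = np.array([1.0, -1.0]), 0.0
dxx, Ixx, Ixu = [], [], []
for _ in range(N):
    xx0 = np.kron(x, x)
    ixx, ixu = np.zeros(n * n), np.zeros(n * m)
    for _ in range(steps_per_interval):
        # Behavior input: probing sinusoids (data is reused off-policy below).
        u = 0.5 * np.array([np.sin(t) + np.sin(3 * t)
                            + np.sin(7 * t) + np.sin(11 * t)])
        ixx += np.kron(x, x) * dt          # accumulate  int x (x) x dtau
        ixu += np.kron(x, u) * dt          # accumulate  int x (x) u dtau
        x = x + dt * (A @ x + B @ u)       # coarse Euler step of the plant
        t += dt
    dxx.append(np.kron(x, x) - xx0)
    Ixx.append(ixx)
    Ixu.append(ixu)
dxx, Ixx, Ixu = map(np.array, (dxx, Ixx, Ixu))

# ---- Phase 2: off-policy policy iteration on the SAME batch of data ----
K = np.zeros((m, n))                       # initial stabilizing gain
for _ in range(8):
    # Least squares for [vec(P); vec(K_next)] from the integral Bellman
    # equation of the current policy, evaluated along off-policy data.
    Theta = np.hstack([
        dxx,
        -2.0 * (Ixu @ np.kron(np.eye(n), R).T
                + Ixx @ np.kron(np.eye(n), R @ K).T),
    ])
    b = -Ixx @ (Q + K.T @ R @ K).reshape(-1, order="F")
    sol = np.linalg.lstsq(Theta, b, rcond=None)[0]
    P = sol[: n * n].reshape(n, n, order="F")
    P = 0.5 * (P + P.T)                    # enforce symmetry of P
    K = sol[n * n:].reshape(m, n, order="F")

# Sanity check against the exact ARE solution computed by scipy.
print("error vs exact ARE solution:",
      np.max(np.abs(P - solve_continuous_are(A, B, Q, R))))
```

Because the integrals of x(x)x and x(x)u are collected once and reused across all iterations, the scheme is off-policy in the same sense as the abstract describes: the evaluated policy changes every iteration while the data-generating behavior policy does not.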
