Abstract
This brief presents a partially model-free solution to the distributed containment control of multiagent systems using off-policy reinforcement learning (RL). The followers are assumed to be heterogeneous with different dynamics, and the leaders are assumed to be active in the sense that their control inputs can be nonzero. Optimality is explicitly imposed in solving the containment problem, not only to drive the agents' states into a convex hull of the leaders' states but also to minimize their transient responses. Inhomogeneous algebraic Riccati equations (AREs) are derived to solve the optimal containment control problem with active leaders. The resulting control protocol for each agent depends on its own state and an estimate of an interior point of the convex hull spanned by the leaders. This estimate is provided by a distributed observer designed for a trajectory inside the convex hull of the active leaders; the observer requires knowledge of only the leaders' dynamics. An off-policy RL algorithm is then developed to solve the inhomogeneous AREs online, in real time, without requiring any knowledge of the followers' dynamics. Finally, a simulation example is presented to show the effectiveness of the proposed algorithm.
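The distributed-observer idea summarized above can be illustrated with a small numerical sketch. The Python snippet below assumes a standard consensus-type observer form in which each follower propagates an estimate of a trajectory inside the convex hull of two active leaders, using only the leaders' dynamics matrix and local graph information; the matrix S, the graph weights A_ff and G_fl, the coupling gain mu, and the leader inputs are illustrative assumptions, not quantities taken from the brief, and the sketch omits the ARE-based optimal feedback and the off-policy RL stage.

```python
import numpy as np

# Minimal sketch (assumed structure, not the brief's exact design):
# each follower runs a consensus-type observer that estimates a point
# inside the convex hull of the active leaders' states.

S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])            # assumed common leader dynamics (harmonic oscillator)

def leader_input(k, t):
    # Active leaders: nonzero bounded control inputs (assumed form).
    return np.array([0.0, 0.2 * np.sin(t + k)])

n_leaders, n_followers, dt, T = 2, 3, 0.01, 20.0

# Follower-to-follower adjacency and follower-to-leader pinning gains (assumed graph).
A_ff = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]], dtype=float)
G_fl = np.array([[1, 0],
                 [0, 0],
                 [0, 1]], dtype=float)
mu = 5.0                                # observer coupling gain (assumed)

x_lead = np.array([[1.0, 0.0], [-1.0, 0.5]])   # leaders' states
zeta = np.zeros((n_followers, 2))              # followers' convex-hull estimates

for step in range(int(T / dt)):
    t = step * dt
    # Leaders propagate with their own (nonzero) inputs.
    for k in range(n_leaders):
        x_lead[k] += dt * (S @ x_lead[k] + leader_input(k, t))
    # Each follower updates its estimate from neighbors' estimates and pinned leaders.
    zeta_new = zeta.copy()
    for i in range(n_followers):
        consensus = sum(A_ff[i, j] * (zeta[j] - zeta[i]) for j in range(n_followers))
        pinning = sum(G_fl[i, k] * (x_lead[k] - zeta[i]) for k in range(n_leaders))
        zeta_new[i] = zeta[i] + dt * (S @ zeta[i] + mu * (consensus + pinning))
    zeta = zeta_new

print("final leader states:\n", x_lead)
print("final convex-hull estimates:\n", zeta)
```

In the brief's full scheme, each follower would then apply an optimal feedback, obtained from the inhomogeneous AREs via the off-policy RL algorithm, to track its observer estimate; the sketch above only shows the observer layer.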