Abstract

In this article, we propose two model-free algorithms using state or output feedback for saturated discrete-time multiagent systems (SDTMASs) to attain global containment control. In most previous works, input saturation is avoided by means of the low gain feedback (LGF) method, which requires knowledge of the agent dynamics, and the SDTMASs can attain only semi-global containment control. In contrast with these works, this article first defines a $Q$-function based on the $Q$-learning (QL) technique and derives the corresponding QL Bellman equation, which is the core of the QL algorithm. Then, to solve the QL Bellman equation, we propose two iterative model-free algorithms using state and output feedback, from whose solutions the LGF matrix can be obtained directly. Furthermore, under the state and output feedback control protocols with the feedback matrices obtained from the proposed model-free algorithms, the SDTMASs achieve global rather than semi-global containment control. Finally, we present simulations that confirm the validity of the proposed algorithms.
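The paper's exact algorithms are not reproduced here, but the following minimal sketch illustrates the kind of model-free QL policy iteration the abstract describes, in the classic Bradtke-Ydstie style for a discrete-time linear system: a quadratic $Q$-function kernel is estimated from data via the QL Bellman equation, and the feedback gain is read off from that solution directly, with no system matrices used. The helper `phi`, the black-box `step` simulator, and all parameter choices are illustrative assumptions; the low-gain parameter, the saturation handling, and the output-feedback variant from the paper are omitted.

```python
import numpy as np

def phi(z):
    """Quadratic basis: upper-triangular entries of z z^T, so that
    z^T H z = theta . phi(z) with theta_ii = H_ii, theta_ij = 2*H_ij (i<j)."""
    idx = np.triu_indices(z.size)
    return np.outer(z, z)[idx]

def ql_policy_iteration(step, n, m, Qc, R, K, iters=15, samples=300, seed=0):
    """Model-free QL policy iteration for a discrete-time linear system.

    step(x, u) -> x_next is an assumed black-box plant/simulator; the system
    matrices are never accessed, which is the model-free ingredient. Each
    iteration solves the QL Bellman equation
        Q(x_k, u_k) = x_k^T Qc x_k + u_k^T R u_k + Q(x_{k+1}, K x_{k+1})
    for the kernel H of Q(x, u) = [x; u]^T H [x; u] by least squares, then
    improves the policy via K <- -H_uu^{-1} H_ux.
    """
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        A_ls, b_ls = [], []
        for _ in range(samples):
            x = rng.standard_normal(n)
            u = K @ x + 0.1 * rng.standard_normal(m)  # exploration noise
            xn = step(x, u)
            z = np.concatenate([x, u])
            zn = np.concatenate([xn, K @ xn])         # greedy next action
            A_ls.append(phi(z) - phi(zn))
            b_ls.append(x @ Qc @ x + u @ R @ u)       # one-step cost
        theta, *_ = np.linalg.lstsq(np.array(A_ls), np.array(b_ls), rcond=None)
        # Unpack theta into the symmetric Q-function kernel H.
        H = np.zeros((n + m, n + m))
        H[np.triu_indices(n + m)] = theta
        H = (H + H.T) / 2.0
        K = -np.linalg.solve(H[n:, n:], H[n:, :n])    # policy improvement
    return K, H

# Hypothetical illustration: a stable 2-state, 1-input plant stands in for
# the unknown agent dynamics; K = 0 is a valid initial stabilizing policy.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
K, _ = ql_policy_iteration(lambda x, u: A @ x + B @ u,
                           n=2, m=1, Qc=np.eye(2), R=np.eye(1),
                           K=np.zeros((1, 2)))
```

In the paper's setting, weighting the cost with a low-gain parameter would make the converged gain an LGF matrix; the sketch above only shows the data-driven solution of the QL Bellman equation that both proposed algorithms rely on.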
