Abstract

This paper investigates the <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$H_\infty$</tex-math></inline-formula> consensus problem for discrete-time fractional-order multi-agent systems (DTFOMASs) subject to external disturbances under both state-feedback and output-feedback control. Based on the short-memory principle, the original DTFOMASs are transformed into classical discrete-time systems. Two consensus protocols with finite-dimensional memory, one for state feedback and one for output feedback, are then proposed. Using the Bellman equation and recursive least squares, two Q-learning (QL) algorithms based on two-player zero-sum games (ZSGs) are presented for <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$H_\infty$</tex-math></inline-formula> control; they learn the optimal feedback gain matrices without any knowledge of the system dynamics or the network topology. Under the two proposed protocols, the DTFOMASs with external disturbances achieve <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$H_\infty$</tex-math></inline-formula> state consensus and output consensus, respectively. Finally, two real scenarios of a high-speed train running on the “Zhengzhou-Wuhan” railway section are used to verify the validity of the proposed approaches.
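As an illustration of the model-free idea the abstract describes, the following is a minimal sketch of zero-sum-game Q-learning for linear discrete-time H∞ control, not the paper's algorithm: a quadratic Q-function kernel H is fitted to data by batch least squares on the Bellman equation, and the control gain K and worst-case disturbance gain L are read off from the blocks of H (as in standard model-free zero-sum Q-learning designs). The matrices A, B, E, the stage weights, and the attenuation level γ² are hypothetical placeholders, not the paper's train dynamics; the model below is used only to generate data, never by the learner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model (placeholder, not the paper's train system);
# it only generates data -- the learner never reads A, B, or E.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
E = np.array([[0.1], [0.0]])
Qc, Rc, gamma2 = np.eye(2), np.eye(1), 4.0  # stage weights and gamma^2
n, m, q = 2, 1, 1

K = np.zeros((m, n))   # control gain,     u = -K x
L = np.zeros((q, n))   # disturbance gain, w =  L x

def phi(z):
    """Quadratic basis vec(z z^T) for the Q-function Q(z) = z^T H z."""
    return np.outer(z, z).ravel()

for it in range(8):  # policy iteration on the zero-sum game
    # ---- policy evaluation: least squares on the Q-Bellman equation
    # z_k^T H z_k = r_k + z_{k+1}^T H z_{k+1}, with on-policy successor
    Phi, tgt = [], []
    x = rng.standard_normal(n)
    for k in range(400):
        u = -K @ x + 0.5 * rng.standard_normal(m)   # exploration noise
        w = L @ x + 0.5 * rng.standard_normal(q)
        xn = A @ x + B @ u + E @ w
        r = x @ Qc @ x + u @ Rc @ u - gamma2 * (w @ w)
        z = np.concatenate([x, u, w])
        zn = np.concatenate([xn, -K @ xn, L @ xn])
        Phi.append(phi(z) - phi(zn))
        tgt.append(r)
        x = xn if np.linalg.norm(xn) < 1e3 else rng.standard_normal(n)
    h, *_ = np.linalg.lstsq(np.array(Phi), np.array(tgt), rcond=None)
    H = h.reshape(n + m + q, n + m + q)
    H = 0.5 * (H + H.T)  # enforce symmetry of the kernel

    # ---- policy improvement: saddle-point gains from the blocks of H
    Hxu = H[:n, n:n + m]; Huu = H[n:n + m, n:n + m]
    Hxw = H[:n, n + m:];  Hww = H[n + m:, n + m:]
    Huw = H[n:n + m, n + m:]
    Sw, Su = np.linalg.inv(Hww), np.linalg.inv(Huu)
    K = np.linalg.solve(Huu - Huw @ Sw @ Huw.T, Hxu.T - Huw @ Sw @ Hxw.T)
    L = np.linalg.solve(Hww - Huw.T @ Su @ Huw, Hxw.T - Huw.T @ Su @ Hxu.T)

print("learned K:", K)
print("learned L:", L)
```

The paper instead applies this kind of learning per agent to the transformed consensus-error dynamics and uses recursive (rather than batch) least squares, but the mechanics of estimating H from data and extracting the two gains are the same.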
