Abstract

This paper presents a conceptual framework for addressing the consensus problem in multi-agent systems with unknown nonlinear dynamics, using a reinforcement learning (RL) based nearly optimal sliding mode controller (SMC). The agents' dynamics are assumed to be subject to uncertainty and mismatched disturbances. An adaptive fixed-time estimator is introduced to estimate each agent's uncertain dynamics and disturbances within a fixed time. The paper proposes two control strategies. In the first, a controller is designed that combines an adaptive SMC with an optimal controller. The adaptation law in the SMC estimates the bound of the fixed-time estimator's error before convergence, ultimately achieving asymptotic convergence to the sliding surface and rendering the agent's dynamics linear. This reduces the task to a linear consensus problem, which is solved by an RL-based adaptive optimal controller through an on-policy critic-actor method. The second control strategy strengthens the adaptive SMC into a fixed-time controller, bounding the time to reach the sliding surface regardless of the initial conditions; this shorter reaching time yields faster convergence of the consensus error to zero. The effectiveness of both strategies is validated through numerical experiments on two real system models, in agreement with the theoretical proofs.
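To make the fixed-time reaching property concrete, a standard fixed-time sliding mode sketch is given below; the sliding variable $s_i$, consensus error $e_i$, gains $k_1, k_2$, and exponents $\alpha, \beta$ are illustrative assumptions, not quantities defined in this abstract. With consensus error $e_i = \sum_j a_{ij}(x_i - x_j)$ over communication graph weights $a_{ij}$, a typical design takes

\[
s_i = \dot{e}_i + \lambda e_i, \qquad
\dot{s}_i = -k_1 \lvert s_i \rvert^{\alpha} \operatorname{sgn}(s_i) - k_2 \lvert s_i \rvert^{\beta} \operatorname{sgn}(s_i),
\]

with $k_1, k_2 > 0$ and $0 < \alpha < 1 < \beta$, for which the reaching time admits an upper bound independent of the initial condition $s_i(0)$. This bounded reaching time, as opposed to asymptotic convergence, is what distinguishes the second strategy from the first.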
