Abstract

This study presents a new framework for merging Adaptive Fuzzy Sliding-Mode Control (AFSMC) with an off-policy Reinforcement Learning (RL) algorithm to control nonlinear under-actuated agents. In particular, a near-optimal leader-follower consensus problem is considered, and a new method is proposed within the framework of graphical games. In the proposed technique, the coefficients of the sliding variables are treated as adaptively tuned policies that achieve an optimal trade-off between satisfactory tracking performance and allowable control effort. Unlike conventional off-policy RL algorithms for consensus control of multi-agent systems, the proposed method does not require partial knowledge of the system dynamics to initialize the RL process. Furthermore, an actor-critic fuzzy methodology is employed to approximate optimal policies using measured input/output data. Using the tuned sliding vector, the control input for each agent is then generated; it comprises a fuzzy term, a robust term, and a saturation-compensating term. The fuzzy system approximates a nonlinear function, the robust term compensates for possible mismatches, and the saturation-compensating gain prevents instability caused by possible actuator saturation. The fuzzy singletons, the bounds of the approximation errors, and the compensating gains are adaptively tuned based on the local sliding variables. Closed-loop asymptotic stability is proved using Lyapunov's second method and Barbalat's lemma. The method's efficacy is verified by consensus control of multiple REMUS AUVs in the vertical plane.
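
To make the structure of the per-agent control input concrete, the following is a minimal sketch of a three-term sliding-mode control law of the kind the abstract describes (a fuzzy term, a robust term, and a saturation-compensating term). It is not the paper's actual scheme: the sliding surface, the Gaussian-membership fuzzy approximator, and all gains (lam, rho, k_sat, u_max) are illustrative assumptions, and the singleton parameters that would be adapted online here are simply fixed.

```python
import numpy as np

def sliding_variable(error, error_dot, lam):
    """First-order sliding surface s = lam * e + e_dot (illustrative choice)."""
    return lam * error + error_dot

def fuzzy_term(s, centers, widths, singletons):
    """Singleton fuzzy approximation: weighted average of Gaussian memberships."""
    weights = np.exp(-((s - centers) / widths) ** 2)
    return float(weights @ singletons / (weights.sum() + 1e-9))

def control_input(s, centers, widths, singletons, rho, k_sat, u_max):
    u_fuzzy = fuzzy_term(s, centers, widths, singletons)  # approximates the nonlinear function
    u_robust = -rho * np.tanh(s)                          # compensates approximation mismatch
    u = u_fuzzy + u_robust
    u_clipped = np.clip(u, -u_max, u_max)
    u_comp = -k_sat * (u - u_clipped)                     # saturation-compensating term
    return np.clip(u + u_comp, -u_max, u_max)

if __name__ == "__main__":
    # Arbitrary example values; in the actual scheme the singletons, error bounds,
    # and compensating gains would be adapted from the local sliding variables.
    centers = np.linspace(-1.0, 1.0, 5)
    widths = np.full(5, 0.5)
    singletons = np.zeros(5)
    s = sliding_variable(error=0.2, error_dot=-0.1, lam=2.0)
    print(control_input(s, centers, widths, singletons, rho=0.5, k_sat=0.1, u_max=1.0))
```
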
