Abstract

This article presents a novel adaptive controller for a small-size unmanned helicopter based on the reinforcement learning (RL) control methodology. The helicopter is subject to system uncertainties and unknown external disturbances. The unmodeled dynamic uncertainties of the system are estimated online by an actor network, while the tracking performance function is optimized by a critic network. The estimation error of the actor-critic network and the unknown external disturbances are compensated by a nonlinear robust component based on the sliding mode control method. The stability of the closed-loop system and the asymptotic convergence of the attitude tracking error are proved via Lyapunov-based stability analysis. Finally, real-time experiments are performed on a helicopter control testbed, and the experimental results show that the proposed controller achieves good control performance.
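To make the composite structure described above concrete, the following Python sketch simulates a toy single-axis attitude loop in which an RBF actor estimates the unmodeled dynamics online, a critic performs a TD-style update of an approximate tracking-performance function, and a smoothed sliding-mode term absorbs the residual estimation error and disturbance. Everything here (the plant model, the RBF features, the gains, the cost, and the adaptation laws) is an illustrative assumption, not the paper's actual algorithm, adaptation laws, or experimental setup.

```python
import numpy as np

# Toy single-axis attitude dynamics (assumed):
#   theta_ddot = a*theta_dot + b*u + delta(x) + d(t)
a, b = -0.8, 2.0                       # assumed nominal model parameters

def unmodeled(x):                      # "true" unmodeled dynamics, unknown to the controller
    return 0.5 * np.sin(x[0]) * x[1]

def disturbance(t):                    # unknown external disturbance
    return 0.3 * np.sin(2.0 * t)

# RBF features shared by the actor and the critic (assumed structure)
centers = np.linspace(-2.0, 2.0, 7)

def phi(x):
    z = x[0] + 0.5 * x[1]              # scalar projection of the state
    return np.exp(-(z - centers) ** 2)

# Controller gains and learning rates (illustrative values)
lam, k_s, eta = 3.0, 4.0, 0.4          # sliding-surface slope, linear gain, robust gain
gamma_a, gamma_c = 5.0, 1.0            # actor / critic adaptation rates
W_a = np.zeros_like(centers)           # actor weights: online estimate of delta(x)
W_c = np.zeros_like(centers)           # critic weights: approximate tracking cost-to-go

dt, T = 0.002, 10.0
x = np.array([0.3, 0.0])               # state: [theta, theta_dot]

for i in range(int(T / dt)):
    t = i * dt
    theta_d, dtheta_d, ddtheta_d = 0.5*np.sin(t), 0.5*np.cos(t), -0.5*np.sin(t)

    e, de = x[0] - theta_d, x[1] - dtheta_d
    s = de + lam * e                   # sliding variable
    f = phi(x)
    delta_hat = W_a @ f                # actor output: estimated unmodeled dynamics

    # Control law: cancel the nominal model and the actor estimate, then add a
    # linear term and a smoothed sliding-mode term for the residual error/disturbance.
    u = (ddtheta_d - lam*de - a*x[1] - delta_hat
         - k_s*s - eta*np.tanh(s / 0.05)) / b

    # Propagate the "true" plant one Euler step.
    dd = a*x[1] + b*u + unmodeled(x) + disturbance(t)
    x_next = x + dt * np.array([x[1], dd])

    # Critic: TD(0)-style update of the approximate tracking-performance function.
    r = e**2 + 0.1*u**2                # instantaneous tracking cost (assumed form)
    td = r*dt + 0.995*(W_c @ phi(x_next)) - (W_c @ f)
    W_c += gamma_c * td * f

    # Actor: adaptation driven by the sliding variable, nudged by the critic's TD error
    # (the paper derives its adaptation laws from a Lyapunov analysis; this is only a stand-in).
    W_a += gamma_a * s * f * dt - 0.1 * td * f

    x = x_next

print(f"final attitude tracking error: {x[0] - 0.5*np.sin(T):+.4f}")
```

In this sketch the actor plays the role of the online uncertainty estimator and the robust tanh-smoothed term stands in for the sliding-mode compensation of the residual approximation error and disturbance; the paper's Lyapunov-based design would dictate the exact gains and adaptation laws.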
