Abstract

This paper investigates a model-free reinforcement-learning-based approach that enables a quadruped robot to manipulate objects while maintaining balance and dynamic stability during walking. First, the dynamics of the quadruped robot are developed in two subspaces: the position-control space and the force-control space. Then, a new long-term performance index is introduced, and a radial basis function neural network serving as a critic network is presented to estimate this unobtainable long-term performance index. Based on the resulting reinforcement signal, an actor neural network is introduced to generate a feedforward compensation term that copes with the nonlinear dynamics and system uncertainties. The robustness of the actor-critic reinforcement learning algorithm is enhanced by incorporating a fractional-order sliding-mode controller into the closed-loop system. Online adaptive laws for both the critic and actor network weights are derived using Lyapunov stability theory, and the uniform ultimate boundedness of the position and force tracking errors is proven. Finally, numerical simulations illustrate the feasibility and effectiveness of the proposed adaptive actor-critic learning-based control scheme.
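The actor-critic structure outlined above can be sketched as a minimal toy loop: an RBF critic estimates a performance index from the tracking error, and an actor produces a feedforward term alongside a robust switching-like term. All gains, RBF parameters, the plant model, and the gradient-style updates below are illustrative assumptions, not the paper's Lyapunov-derived adaptive laws or its quadruped dynamics:

```python
import numpy as np

# Toy sketch of the described scheme on a scalar first-order plant (assumption).
n_rbf = 5
centers = np.linspace(-1.0, 1.0, n_rbf)   # RBF centers over the error range
width = 0.5                                # common Gaussian width (assumption)
W_c = np.zeros(n_rbf)                      # critic network weights
W_a = np.zeros(n_rbf)                      # actor network weights
alpha_c, alpha_a = 0.05, 0.02              # learning rates (assumptions)
k_s = 2.0                                  # robust gain standing in for the
                                           # sliding-mode term (assumption)

def phi(e):
    """Gaussian RBF feature vector of the scalar tracking error."""
    return np.exp(-((e - centers) ** 2) / (2.0 * width ** 2))

x, x_ref = 0.8, 0.0                        # state and reference
for _ in range(200):
    e = x - x_ref
    feats = phi(e)
    J_hat = W_c @ feats                    # critic's performance estimate
    r = e ** 2                             # instantaneous cost (assumption)
    # actor feedforward plus a smoothed switching term for robustness
    u = W_a @ feats - k_s * np.tanh(e)
    x += 0.1 * (-x + u)                    # toy stable plant, Euler step
    # gradient-style weight updates standing in for the adaptive laws
    W_c -= alpha_c * (J_hat - r) * feats
    W_a -= alpha_a * J_hat * np.sign(e) * feats
```

Running the loop drives the tracking error toward a small neighborhood of zero, illustrating the boundedness property claimed in the abstract (here only empirically, for this toy plant).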

