Abstract

In this paper, an adaptive finite-time impedance control strategy based on the optimised backstepping (OB) technique is proposed for robotic manipulators subject to state constraints. Existing OB methods approximately solve the intractable Hamilton-Jacobi-Bellman (HJB) equation by reinforcement learning (RL) with the Bellman residual error, which leads to intricate actor-critic updating laws and a persistent excitation requirement. To overcome this drawback, we construct simplified RL updating laws by converting the problem into the solution of a positive-definite function composed of the actor-critic network weights. These simplified updating laws significantly reduce controller complexity and relax the persistent excitation condition. Based on the barrier Lyapunov function, a barrier-type performance index function is constructed for the optimised controller under state constraints. Finite-time stability theory guarantees finite-time convergence of the closed-loop system without violation of the prescribed constraints. Finally, we demonstrate the effectiveness of the proposed method in a simulation example with environment-robot interaction.
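For readers unfamiliar with the barrier-based construction summarised above, a common form in the constrained-control literature (the paper's exact definitions may differ, and all symbols here are illustrative) is the logarithmic barrier Lyapunov function, which grows unbounded as the constrained error z approaches its bound k_b:

\[
V_b(z) = \frac{1}{2}\,\log\frac{k_b^{2}}{k_b^{2}-z^{2}}, \qquad |z| < k_b .
\]

A barrier-type performance index then embeds a similar logarithmic term, so that minimising the cost implicitly keeps the state within the prescribed constraint:

\[
J(z,u) = \int_{t}^{\infty}\Big(\log\frac{k_b^{2}}{k_b^{2}-z^{2}(\tau)} + u^{\top}(\tau)\,R\,u(\tau)\Big)\,\mathrm{d}\tau ,
\]

whose associated HJB equation is the one approximated by the actor-critic RL scheme.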
