Abstract
In this paper, an adaptive finite-time impedance control strategy based on the optimised backstepping (OB) technique is proposed for robotic manipulators subject to state constraints. Existing OB methods approximately solve the intractable Hamilton-Jacobi-Bellman equation by reinforcement learning (RL) with a Bellman residual error, which involves intricate actor-critic updating laws and a persistent excitation requirement. To overcome this drawback, we construct simplified RL updating laws by converting the problem into the solution of a positive-definite function composed of the actor-critic network weights. The simplified RL updating laws significantly reduce controller complexity and relax the persistent excitation condition. Based on a barrier Lyapunov function, a barrier-type performance index function is constructed for the optimised controller under state constraints. Finite-time stability theory guarantees finite-time convergence of the closed-loop system without violating the prescribed constraints. Finally, we demonstrate the effectiveness of the proposed method through a simulation example with environment-robot interaction.
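For concreteness, a minimal sketch of the barrier-type constructions referred to above, written in assumed notation (the scalar error z, constraint bound k_b, and control-like variable alpha are illustrative and not taken from the paper): a standard barrier Lyapunov function for the constraint |z| < k_b is

V_b(z) = \frac{1}{2}\log\frac{k_b^{2}}{k_b^{2}-z^{2}}, \qquad |z| < k_b,

which grows unbounded as |z| approaches k_b, and a barrier-type performance index of the form

J(z(t)) = \int_{t}^{\infty}\Big[\log\frac{k_b^{2}}{k_b^{2}-z^{2}(\tau)} + \alpha^{2}(\tau)\Big]\,d\tau

embeds the state constraint directly in the cost, so that an (approximately) optimal controller obtained from it cannot drive z to the constraint boundary without incurring unbounded cost. The specific cost integrand here is an assumption chosen for illustration, not the paper's exact performance index.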