Abstract

Cable-driven parallel robots (CDPRs) exhibit complex cable dynamics and operate under environmental uncertainties, both of which make precise control challenging. This paper introduces reinforcement learning to offset the negative effect of these uncertainties on the control performance of CDPRs. The problem of controller design for CDPRs is investigated in the framework of deep reinforcement learning. A learning-based control algorithm is proposed to compensate for uncertainties arising from cable elasticity, mechanical friction, and similar sources: a basic control law is given for the nominal model, and a Lyapunov-based deep reinforcement learning control law is designed on top of it. Moreover, the stability of the closed-loop tracking system under the reinforcement learning algorithm is proved. Both simulation and experiments validate the effectiveness and advantages of the proposed control algorithm.
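The "basic law for the nominal model plus a learned compensation term" structure described above can be illustrated with a minimal sketch. This is a hypothetical 1-DOF example, not the paper's CDPR model or its actual algorithm: the nominal law here is a PD controller, and a simple gradient-style adaptive estimate stands in for the deep reinforcement learning compensator. All names (`simulate`, `d_hat`, the gains) are illustrative assumptions.

```python
# Hypothetical 1-DOF illustration of "nominal control law + learned
# compensation" (NOT the paper's CDPR dynamics or RL algorithm).
# The plant has an unmodeled constant disturbance d_true (e.g. friction);
# a PD law handles the known dynamics, and an adaptive residual estimate
# d_hat stands in for the learned compensation term.

def simulate(compensate, steps=6000, dt=0.005):
    m = 1.0             # known mass (nominal model)
    d_true = 0.8        # unknown disturbance (cable friction, etc.)
    kp, kd = 40.0, 12.0 # PD gains for the basic control law
    gamma = 10.0        # adaptation gain for the compensation term
    x = v = 0.0
    d_hat = 0.0         # learned estimate of the disturbance
    x_ref = 1.0         # step reference to track
    for _ in range(steps):
        e, edot = x_ref - x, -v
        u_nom = kp * e + kd * edot              # basic control law
        u_comp = d_hat if compensate else 0.0   # learned compensation
        u = u_nom + u_comp
        a = (u - d_true) / m                    # plant: m*a = u - d_true
        v += a * dt
        x += v * dt
        if compensate:
            # gradient-style update standing in for the RL policy update
            d_hat += gamma * (e + edot) * dt
    return abs(x_ref - x)

err_plain = simulate(False)  # steady-state error left by PD alone
err_comp = simulate(True)    # compensation drives the error toward zero
```

Without compensation the PD loop settles with a residual error of roughly `d_true / kp`; with the adaptive term the estimate converges toward the disturbance and the tracking error shrinks, which is the qualitative effect the learned compensator is meant to provide.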
