Abstract
In practical applications of cable‐driven parallel robots (CDPRs), external disturbances inevitably degrade actuator performance, leading to low positioning accuracy of the end‐effector (EE) and even system failures. This paper presents a disturbance observer‐based control scheme that uses deep reinforcement learning (RL) to suppress the effect of external disturbances. A controller based on the non‐singular terminal sliding mode (NTSM) is proposed to enhance system robustness. Disturbance estimation is investigated, and deep RL is employed to enhance a model‐based disturbance observer, yielding the RLDO; a corresponding NTSM controller, the RLDO‐NTSMC, is then proposed. Simulation and experimental results validate that the RLDO effectively improves estimation accuracy and that the RLDO‐NTSMC significantly increases the control accuracy of CDPRs. The root mean square of the EE's positioning errors under the RLDO‐NTSMC decreased by over 93% compared with the classical augmented proportional–derivative (APD) controller.
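To make the structure of such a scheme concrete, the sketch below simulates a generic NTSM controller with disturbance-observer compensation on a single-axis double integrator. It is a minimal illustration under assumed dynamics and gains: the plant, reference, disturbance, the gains beta, gamma, k_sw, L, and the rl_correction stub are all illustrative placeholders, not the paper's CDPR model or the actual RLDO implementation.

```python
import numpy as np

# Illustrative parameters (not from the paper): a 1-DOF double integrator
# x_ddot = u + d stands in for one EE coordinate.
beta, gamma = 5.0, 1.5        # NTSM surface gains, with 1 < gamma < 2
k_sw, phi = 8.0, 0.05         # switching gain and boundary-layer width
L = 30.0                      # disturbance-observer gain
dt, T = 1e-3, 5.0

def reference(t):
    """Desired trajectory and its first two derivatives (illustrative sinusoid)."""
    return np.sin(t), np.cos(t), -np.sin(t)

def disturbance(t):
    """Unknown external disturbance acting on the actuator (illustrative)."""
    return 0.8 * np.sin(2.0 * t) + 0.3

def rl_correction(e, e_dot, d_hat_model):
    """Placeholder for a learned refinement of the model-based estimate.
    In the paper a deep RL policy plays this role; here it is stubbed out
    to keep the sketch self-contained."""
    return 0.0

x, x_dot = 0.0, 0.0
z = 0.0                       # observer internal state; d_hat = z + L * x_dot
log = []
for i in range(int(T / dt)):
    t = i * dt
    xd, xd_dot, xd_ddot = reference(t)
    e, e_dot = x - xd, x_dot - xd_dot

    # Non-singular terminal sliding surface: s = e + |e_dot|^gamma * sign(e_dot) / beta
    s = e + np.sign(e_dot) * abs(e_dot) ** gamma / beta

    # Model-based disturbance estimate plus the (stubbed) learned correction
    d_hat_model = z + L * x_dot
    d_hat = d_hat_model + rl_correction(e, e_dot, d_hat_model)

    # NTSM control law: equivalent control + disturbance compensation + switching term
    u = (xd_ddot
         - (beta / gamma) * abs(e_dot) ** (2.0 - gamma) * np.sign(e_dot)
         - d_hat
         - k_sw * np.clip(s / phi, -1.0, 1.0))   # saturated switching limits chattering

    # Plant and observer integration (explicit Euler)
    d = disturbance(t)
    x_ddot = u + d
    z += dt * (-L * z - L * (L * x_dot + u))     # drives d_hat toward the true d
    x_dot += dt * x_ddot
    x += dt * x_dot
    log.append((t, e, d - d_hat))

print("final |tracking error|:", abs(log[-1][1]))
print("final |estimation error|:", abs(log[-1][2]))
```

The compensation term `- d_hat` in the control law is what the observer contributes: the better the disturbance estimate (the role the RL refinement plays in the paper), the smaller the switching gain needed and the tighter the tracking.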