Abstract

Purpose
This paper uses a model-free reinforcement learning (RL) algorithm to optimize the manipulability of a redundant manipulator, overcoming the matrix inversion, complicated formula transformations and expensive computation time that burden existing schemes.

Design/methodology/approach
Manipulability optimization is an effective way to address the singularity problem that arises in manipulator control. Several control schemes have been proposed to optimize manipulability during trajectory tracking, but they involve matrix inversion, complicated formula transformations and expensive computation time.

Findings
The redundant manipulator trained by RL adjusts its configuration in real time to optimize manipulability in an inverse-free manner while tracking the desired trajectory. Computer simulations and physical experiments demonstrate that, compared with existing methods, the average manipulability is increased by 58.9% and the computation time is reduced to 17.9% of the original. The proposed method therefore effectively optimizes manipulability while significantly shortening computation time.

Originality/value
To the best of the authors' knowledge, this is the first method to optimize manipulability using RL during trajectory tracking. The approach is compared with existing singularity-avoidance and manipulability-maximization techniques, and it is shown to achieve better optimization results with less computing time.
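For readers unfamiliar with the quantity being optimized, the sketch below computes the standard Yoshikawa manipulability measure, w = sqrt(det(J J^T)), which tends to zero near a singularity; the abstract does not define the measure explicitly, so this is an assumption, and the planar two-link Jacobian and function names are illustrative, not taken from the paper. Note that evaluating w requires no matrix inversion, consistent with the "inverse-free" claim.

```python
import numpy as np

def manipulability(J: np.ndarray) -> float:
    """Yoshikawa measure w = sqrt(det(J J^T)); w -> 0 near a singularity.

    Assumed standard definition; no pseudoinverse of J is needed.
    """
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def planar_2link_jacobian(q, l1=1.0, l2=1.0):
    """Jacobian of a hypothetical planar two-link arm with link lengths l1, l2."""
    q1, q2 = q
    s1, s12 = np.sin(q1), np.sin(q1 + q2)
    c1, c12 = np.cos(q1), np.cos(q1 + q2)
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

# Near-singular (elbow almost straight) vs. well-conditioned configuration:
print(manipulability(planar_2link_jacobian([0.3, 0.01])))  # ~0.01, near singular
print(manipulability(planar_2link_jacobian([0.3, 1.57])))  # ~1.0, far from singular
```

For this arm, w reduces to l1*l2*|sin(q2)|, so a trajectory-tracking controller (RL-based or otherwise) that keeps w large is, in effect, keeping the elbow away from full extension.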
