Abstract

During path tracking, the continuous path of a manipulator is often discretized into a series of independent action poses, and computing the inverse kinematic solutions for these poses is computationally expensive and yields inconsistent (non-unique) results. This paper proposes a manipulator path-tracking method based on deep reinforcement learning to address this problem. The method takes an end-to-end learning approach to closed-loop control and eliminates the inverse kinematic solution step by reformulating the path-tracking task as a sequential decision problem. The paper first explores the feasibility of deep reinforcement learning for manipulator path tracking; after verifying feasibility, path tracking of a multi-degree-of-freedom (multi-DOF) manipulator is performed using a maximum-entropy deep-reinforcement-learning algorithm. The experimental findings demonstrate that the approach performs well in manipulator path tracking, requires neither an inverse kinematic solution nor a dynamics model, and can perform tracking control in continuous space. The proposed method is therefore of significant value for research on manipulator path tracking.
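The reformulation described above can be illustrated with a minimal sketch (not the paper's actual code): path tracking on a planar 2-DOF arm cast as a sequential decision problem. The agent never solves inverse kinematics; it observes the joint angles and the current target waypoint, applies joint-velocity actions in continuous space, and receives a dense reward equal to the negative end-effector tracking error. Link lengths, the reward shape, and the environment interface are all illustrative assumptions; a maximum-entropy algorithm such as SAC would then learn a policy over this interface.

```python
import math

L1, L2 = 1.0, 1.0  # assumed link lengths (illustrative)

def forward_kinematics(q1, q2):
    """End-effector position of a planar 2-link arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

class PathTrackingEnv:
    """Minimal MDP: state = (q1, q2, target_x, target_y),
    action = joint-velocity commands, reward = -tracking error."""

    def __init__(self, path, dt=0.05):
        self.path = path  # list of (x, y) waypoints to track
        self.dt = dt
        self.reset()

    def reset(self):
        self.q = [0.5, 0.5]  # arbitrary initial joint angles
        self.k = 0           # index of the current waypoint
        return self._obs()

    def _obs(self):
        tx, ty = self.path[self.k]
        return (self.q[0], self.q[1], tx, ty)

    def step(self, action):
        # Closed-loop control: integrate joint-velocity action.
        self.q[0] += action[0] * self.dt
        self.q[1] += action[1] * self.dt
        x, y = forward_kinematics(*self.q)
        tx, ty = self.path[self.k]
        # Dense reward: negative Euclidean tracking error, so the
        # learned policy tracks the path without any inverse solution.
        reward = -math.hypot(x - tx, y - ty)
        self.k = min(self.k + 1, len(self.path) - 1)
        done = self.k == len(self.path) - 1
        return self._obs(), reward, done
```

Any continuous-action policy (learned or hand-coded) can interact with this environment through `reset`/`step`; the point is that the mapping from path to joint commands is learned from the reward signal rather than computed analytically.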
