Abstract

Motion planning aims to find a path for a manipulator from an initial position to a target position while satisfying a set of constraints. Traditional methods usually transform the problem into finding the optimal solution of an objective function under constraints, or rely on sampling-based search, but these methods cope poorly with unstructured environments and incomplete observations. In recent years, reinforcement learning has proved to be a promising direction for this problem, but motion planning based on reinforcement learning still faces many difficulties, such as low data efficiency, the curse of dimensionality, and insufficient robustness. We try to compress the dimensionality of the action space: instead of treating the whole manipulator as the object whose actions the planning policy outputs, we consider the action output of each joint motor locally. We propose an approach in which each joint of the manipulator is modeled as an agent, and we use a centralized-training, decentralized-execution multiagent reinforcement learning method to plan the motion of the manipulator. Experiments show that this approach greatly shortens the training time of the motion planning task and achieves better positioning accuracy and environmental robustness.
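The structure described above, one agent per joint with a centralized critic, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 6-DOF joint count, the per-joint observation size, and the linear numpy "networks" standing in for the actors and critic are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 6    # assumed 6-DOF manipulator
OBS_DIM = 4     # assumed per-joint observation (e.g. angle, velocity, target error)
STATE_DIM = N_JOINTS * OBS_DIM

class JointAgent:
    """Decentralized actor: one agent per joint, sees only its local
    observation and outputs a single bounded joint command, so each
    agent's action space is one-dimensional."""
    def __init__(self):
        self.w = rng.normal(scale=0.1, size=OBS_DIM)

    def act(self, local_obs):
        return float(np.tanh(self.w @ local_obs))

class CentralCritic:
    """Centralized critic used only during training: scores the global
    state together with the actions of all agents."""
    def __init__(self):
        self.w = rng.normal(scale=0.1, size=STATE_DIM + N_JOINTS)

    def value(self, state, actions):
        return float(self.w @ np.concatenate([state, actions]))

agents = [JointAgent() for _ in range(N_JOINTS)]
critic = CentralCritic()

state = rng.normal(size=STATE_DIM)
local_obs = state.reshape(N_JOINTS, OBS_DIM)  # each agent gets its own slice

# Decentralized execution: each joint acts on local information only.
actions = np.array([a.act(o) for a, o in zip(agents, local_obs)])

# Centralized training signal: the critic conditions on the global
# state plus every agent's action.
q = critic.value(state, actions)
```

The key property the sketch shows is the split in information flow: at execution time no agent needs the full state, while at training time the critic sees everything, which is what makes the per-joint decomposition learnable despite each agent's partial view.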
