Abstract

As application scenarios change rapidly, motion planning for robotic arms plays an increasingly important role. Traditional demonstration-based motion planning methods for robotic arms cannot be deployed quickly, and using reinforcement learning algorithms to solve motion planning problems has emerged as a research trend in recent years. However, reinforcement learning algorithms struggle to converge quickly on complex tasks, which makes training inefficient and difficult in practice. This paper proposes a robotic arm motion planning method based on curriculum reinforcement learning. The method adopts the concept of an obstacle effective sphere to simplify obstacles in the environment: the radius of each obstacle's effective sphere is adaptively adjusted according to the reinforcement learning agent's real-time motion planning ability, so that the agent always trains in an environment matched to its ability. The agent is first trained in a simplified environment and then gradually transitions to the complete obstacle environment. Experiments in a virtual environment show that the method performs motion planning successfully. A comparison with training using only the PPO algorithm shows that the proposed approach effectively improves the efficiency of reinforcement learning training and reduces the difficulty of convergence.
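
As a rough illustration of the adaptive curriculum described above, the Python sketch below scales an obstacle's effective-sphere radius with the agent's recent success rate, starting from a simplified environment and growing toward the full obstacle size. The class name, thresholds, update rule, and the env/ppo_agent placeholders are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an adaptive obstacle-radius curriculum (illustrative only).
# Assumption: "easier" means a smaller effective-sphere radius, and the radius
# grows toward its full value as the agent's recent success rate improves.

from collections import deque


class ObstacleRadiusCurriculum:
    def __init__(self, full_radius, window=50,
                 grow_threshold=0.8, shrink_threshold=0.3,
                 step=0.05, min_scale=0.1):
        self.full_radius = full_radius       # true effective-sphere radius
        self.scale = min_scale               # start in a simplified environment
        self.min_scale = min_scale
        self.step = step
        self.grow_threshold = grow_threshold
        self.shrink_threshold = shrink_threshold
        self.results = deque(maxlen=window)  # recent episode outcomes (1 = success)

    def record(self, success: bool) -> None:
        """Log the outcome of one motion-planning episode."""
        self.results.append(1.0 if success else 0.0)

    def current_radius(self) -> float:
        """Radius the simulator should use for the obstacle this episode."""
        return self.scale * self.full_radius

    def update(self) -> None:
        """Adapt difficulty to the agent's recent planning ability."""
        if len(self.results) < self.results.maxlen:
            return  # not enough evidence yet
        rate = sum(self.results) / len(self.results)
        if rate >= self.grow_threshold:
            # Agent is doing well: enlarge the obstacle toward its full size.
            self.scale = min(1.0, self.scale + self.step)
        elif rate <= self.shrink_threshold:
            # Agent is struggling: shrink the obstacle to ease the task.
            self.scale = max(self.min_scale, self.scale - self.step)


# Hypothetical training-loop usage with a PPO agent (env, run_episode, and
# ppo_agent are placeholders, not part of the paper):
# curriculum = ObstacleRadiusCurriculum(full_radius=0.25)
# for episode in range(num_episodes):
#     env.set_obstacle_radius(curriculum.current_radius())
#     success = run_episode(env, ppo_agent)
#     curriculum.record(success)
#     curriculum.update()
```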
