Abstract

Motion planning and its optimization are vital yet difficult for redundant robot manipulators in environments with obstacles. In this article, a general motion planning framework that integrates deep reinforcement learning (DRL) is proposed to explore the length-optimal path in Cartesian space and to derive the energy-optimal solution to inverse kinematics. First, based on the maximum entropy framework and Tsallis entropy, a DRL algorithm with clipped automatic entropy adjustment is proposed to enable the agent to cope with diverse tasks. Second, a path planning structure that combines a traditional path planner with DRL is proposed, integrating the powerful exploration capability of the former with the experience-replay exploitation of the latter to enhance planning performance. Third, based on the exploration ability of DRL and the nonlinear fitting ability of artificial neural networks, a structure is proposed to provide energy-optimal inverse kinematics solutions for redundant robot manipulators. Finally, experimental results in both simulated and real-world customized scenarios verify the performance of the proposed work.
