Abstract

Motion planning and its optimization are vital yet difficult for redundant robot manipulators operating in environments with obstacles. In this article, a general motion planning framework that integrates deep reinforcement learning (DRL) is proposed to explore length-optimal paths in Cartesian space and to derive energy-optimal solutions to inverse kinematics. First, based on the maximum entropy framework and Tsallis entropy, a DRL algorithm with clipped automatic entropy adjustment is proposed so that the agent can cope with diverse tasks. Second, a path planning structure that combines a traditional path planner with DRL is proposed, integrating the powerful exploration capability of the former with the experience-replay-based exploitation of the latter to enhance planning performance. Third, exploiting the exploration ability of DRL and the nonlinear fitting ability of artificial neural networks, a structure is proposed that provides energy-optimal inverse kinematics solutions for redundant robot manipulators. Finally, experimental results in both simulated and real-world customized scenarios verify the performance of the proposed framework.
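To make the "clipped automatic entropy adjustment" idea concrete, the following is a minimal sketch of how a maximum-entropy RL temperature (as in soft actor-critic) can be tuned by gradient descent on a temperature loss and then clipped to a bounded range. The function name, learning rate, and clipping bounds are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def update_temperature(alpha, log_probs, target_entropy, lr=1e-3,
                       alpha_min=1e-4, alpha_max=1.0):
    """Sketch of clipped automatic entropy-temperature adjustment.

    Takes one gradient step on the SAC-style temperature objective
        J(alpha) = E[-alpha * (log pi(a|s) + target_entropy)],
    then clips alpha to [alpha_min, alpha_max]; the clipping range
    here is an assumed stand-in for the paper's clipping mechanism.
    """
    # dJ/d(alpha): negative when policy entropy is below target,
    # so alpha grows and exploration is encouraged.
    grad = -np.mean(np.asarray(log_probs) + target_entropy)
    alpha = alpha - lr * grad
    return float(np.clip(alpha, alpha_min, alpha_max))

# Example: entropy below target -> temperature is nudged upward.
alpha_new = update_temperature(0.1, [-0.5, -0.5], target_entropy=1.0)
```

With sampled log-probabilities of -0.5 and a target entropy of 1.0, the policy entropy (0.5) is below target, so the update increases `alpha`; the clip keeps the temperature within a sane range regardless of gradient magnitude.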
