Abstract

In this study, to address the issues of slow convergence and poor learning efficiency in the early stages of training, the authors improve and optimise the Deep Deterministic Policy Gradient (DDPG) algorithm. First, inspired by the Artificial Potential Field method, the action selection strategy of DDPG is improved to accelerate convergence during the early stages of training and to reduce the time it takes for the mobile robot to reach the target point. Then, the neural network structure of the DDPG algorithm is optimised with Long Short-Term Memory, which accelerates the algorithm's convergence in complex dynamic scenes. Static and dynamic simulation experiments with mobile robots are carried out in ROS. Test results demonstrate that the Artificial Potential Field method-Long Short-Term Memory Deep Deterministic Policy Gradient (APF-LSTM DDPG) algorithm converges significantly faster in complex dynamic scenes, and its success rate is improved by 7.3% and 3.6% compared with the DDPG and LSTM-DDPG algorithms, respectively. Finally, the effectiveness of the proposed method is also demonstrated in real-world scenarios on a real mobile robot platform, laying a foundation for the path planning of mobile robots in complex, changing environments.
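To make the two ideas summarised above concrete, the following is a minimal sketch, not the authors' implementation: it assumes a PyTorch setup, a 2-D planar action (interpreted as a heading/velocity vector), and illustrative parameter names (k_att, k_rep, d0, epsilon) that do not appear in the paper. It shows an LSTM-based actor, as in LSTM-DDPG, and an APF-biased action selection rule that decays toward the learned policy.

import numpy as np
import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    """Actor with an LSTM layer so the policy can exploit short
    observation histories in dynamic scenes (act_dim assumed 2)."""
    def __init__(self, obs_dim, act_dim=2, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, obs_seq):
        # obs_seq: (batch, seq_len, obs_dim); use the last LSTM output
        out, _ = self.lstm(obs_seq)
        return self.head(out[:, -1])

def apf_direction(robot_pos, goal_pos, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """Unit direction of the resultant APF force: attraction to the
    goal plus repulsion from obstacles within influence radius d0."""
    force = k_att * (goal_pos - robot_pos)
    for obs in obstacles:
        diff = robot_pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    n = np.linalg.norm(force)
    return force / n if n > 1e-6 else force

def select_action(actor, obs_seq, robot_pos, goal_pos, obstacles, epsilon):
    """Blend the APF heading with the policy output; epsilon starts
    near 1 and decays, so APF guides early exploration only."""
    with torch.no_grad():
        a_policy = actor(torch.as_tensor(obs_seq, dtype=torch.float32)
                         .unsqueeze(0)).squeeze(0).numpy()
    a_apf = apf_direction(robot_pos, goal_pos, obstacles)
    return (1.0 - epsilon) * a_policy + epsilon * a_apf

Under this reading, the APF term supplies a sensible heading before the critic has learned anything useful, which is one plausible way the warm-up speed-up described in the abstract could be realised.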
