Abstract

Obstacle avoidance for manipulators is an active topic in robot control. The Artificial Potential Field Method (APFM) is a widely used obstacle-avoidance path-planning method with notable advantages. However, APFM also has shortcomings: it is inefficient at avoiding obstacles close to the target and at avoiding dynamic obstacles. Reinforcement Learning (RL), which needs only an automatic learning model that continuously improves itself in a specified environment, is in principle capable of compensating for these shortcomings. In this paper, we introduce an approach that hybridizes RL and APFM to address these problems. We define Distance Reinforcement Factors (DRF) and Force Reinforcement Factors (FRF) to integrate RL and APFM more effectively: the RL reward function is decomposed into two parts through DRF and FRF, which are activated in different situations to optimize APFM. By finding an optimal strategy through RL, our method achieves better obstacle-avoidance performance. Its effectiveness is verified by multiple sets of simulation experiments, comparative experiments, and physical experiments with different types of obstacles. Our approach outperforms traditional APFM and another improved APFM method in avoiding collisions and in avoiding approaching obstacles, and the physical experiments confirm the practicality of the proposed algorithm.
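The abstract builds on the classic artificial potential field formulation, in which an attractive force pulls the end effector toward the goal and a repulsive force pushes it away from obstacles inside an influence radius. The paper's DRF/FRF reward decomposition is not specified here, so the sketch below shows only the standard APFM force computation it optimizes; the function name, gain parameters (`k_att`, `k_rep`), and influence radius `rho0` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def apf_force(q, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=0.5):
    """Classic APFM force at position q (a sketch, not the paper's method).

    Attractive term: gradient of 0.5 * k_att * ||goal - q||^2.
    Repulsive term: active only within distance rho0 of an obstacle.
    """
    f = k_att * (goal - q)  # pull toward the goal
    for obs in obstacles:
        diff = q - obs
        d = np.linalg.norm(diff)
        if 0.0 < d < rho0:
            # Repulsion grows sharply as the obstacle is approached.
            f += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (diff / d)
    return f
```

In this formulation a local minimum can arise when the attractive and repulsive terms cancel, e.g. when an obstacle lies close to the goal; that is exactly the failure mode the abstract says the RL component is meant to mitigate.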
