Abstract

Optimal motion planning involves obstacle avoidance, and path planning is the key to success in optimal motion planning. Due to their computational demands, most path planning algorithms cannot be employed in real-time applications. Model-based reinforcement learning approaches for path planning have achieved notable success in the recent past. Yet most such approaches do not produce deterministic output because of their inherent randomness. In this paper, we investigate existing reinforcement learning-based approaches for path planning and propose such an approach for path planning in a 3D environment. The first of these reinforcement learning-based approaches is a deterministic tree-based approach, and the other two are based on Q-learning and approximate policy gradient, respectively. We tested these approaches on two different simulators, each of which consists of a set of random obstacles that can be changed or moved dynamically. After analysing the results and computation times, we concluded that the deterministic tree search approach provides highly stable results but at a computational cost considerably higher than that of the other two approaches. Finally, comparative results are provided in terms of accuracy and computation time.
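To illustrate the Q-learning-based family of approaches the abstract refers to, the following is a minimal tabular Q-learning sketch for grid path planning with static obstacles. All names, grid dimensions, rewards, and hyperparameters here are illustrative assumptions, not the paper's implementation; the paper itself targets a 3D environment with dynamically movable obstacles.

```python
import random

GRID = 5                                      # 5x5 grid (illustrative size)
OBSTACLES = {(1, 1), (2, 3), (3, 1)}          # hypothetical static obstacle cells
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; moves off-grid or into an obstacle leave the agent in place."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if not (0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID) or nxt in OBSTACLES:
        return state, -1.0                    # penalty for an invalid move
    return nxt, (10.0 if nxt == GOAL else -0.1)  # small step cost, goal reward

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Standard tabular Q-learning with an epsilon-greedy behaviour policy."""
    random.seed(seed)
    q = {}                                    # Q-table: (state, action_index) -> value
    for _ in range(episodes):
        s = START
        for _ in range(100):                  # cap episode length
            if random.random() < eps:
                a = random.randrange(4)       # explore
            else:
                a = max(range(4), key=lambda i: q.get((s, i), 0.0))  # exploit
            s2, r = step(s, ACTIONS[a])
            best_next = max(q.get((s2, i), 0.0) for i in range(4))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
            if s == GOAL:
                break
    return q

def greedy_path(q, max_len=50):
    """Roll out the learned greedy policy from START and record visited cells."""
    path, s = [START], START
    for _ in range(max_len):
        a = max(range(4), key=lambda i: q.get((s, i), 0.0))
        s, _ = step(s, ACTIONS[a])
        path.append(s)
        if s == GOAL:
            break
    return path

q = train()
path = greedy_path(q)
print(path)
```

The non-determinism the abstract highlights is visible here: the learned path depends on the random exploration seed, whereas a deterministic tree search would always return the same route for the same obstacle layout.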
