Abstract

Motion planning in unknown environments is challenging because of the uncertainties involved. The partially observable Markov decision process (POMDP) is a general mathematical framework for planning under such uncertainty. Recent POMDP solvers generally adopt a sparse reward scheme, so the robot's exploration may be hindered by the lack of immediate rewards, resulting in excessively long planning times. In this article, a POMDP method, the information-entropy determinized sparse partially observable tree (IE-DESPOT), is proposed to find high-quality solutions and plan efficiently in unknown environments. First, a novel sampling method integrating the state distribution with a Gaussian distribution is proposed to improve the quality of the sampled states. Then, an information entropy over the sampled states is established for real-time reward calculation, improving the robot's exploration efficiency. Moreover, the near-optimality and convergence of the proposed algorithm are analyzed. Compared with general-purpose POMDP solvers, the proposed algorithm converges quickly to a near-optimal policy in many examples of interest. Finally, IE-DESPOT's performance is verified in real mobile robot experiments.
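The abstract names two ingredients but gives no implementation detail: sampling states by combining the belief's state distribution with a Gaussian distribution, and using the information entropy of the sampled states as an immediate reward. The sketch below is only a minimal illustration of those two ideas, assuming a particle-based belief and a histogram estimate of entropy; the function names (sample_states, entropy_reward) and parameters (sigma, bins, weight) are hypothetical and not taken from the paper.

```python
import numpy as np

def sample_states(belief_particles, n_samples, sigma=0.1, rng=None):
    """Resample belief particles and perturb each with Gaussian noise --
    a sketch of a sampler mixing the empirical state distribution with
    a Gaussian distribution (assumed form, not the paper's)."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.integers(len(belief_particles), size=n_samples)
    base = np.asarray(belief_particles)[idx]
    return base + rng.normal(scale=sigma, size=base.shape)

def entropy_reward(samples, bins=10, weight=1.0):
    """Shannon entropy of the sampled states, usable as an intermediate
    exploration reward between sparse task rewards (entropy estimated
    from a histogram over the state space -- an assumed discretization)."""
    hist, _ = np.histogramdd(samples, bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]  # drop empty bins; 0*log(0) is taken as 0
    return weight * float(-(p * np.log(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.uniform(0.0, 1.0, size=(100, 2))  # toy 2-D belief
    samples = sample_states(particles, n_samples=500, rng=rng)
    print(f"entropy reward: {entropy_reward(samples):.3f}")
```

In a tree-search solver such as DESPOT, such an entropy term could be added to the sparse task reward at each node to supply the immediate feedback the abstract describes; the actual IE-DESPOT reward formulation may differ.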
