Abstract

Autonomous exploration systems require planning under uncertainty. Markov Decision Processes (MDPs) provide a classical framework based on an enumerated, unstructured state-space representation. Recent work in probabilistic planning proposes more compact and structured approaches. We present and discuss factorization techniques based on state variables, then decomposition techniques based on sub-regions. We propose a novel hybrid approach combining both that is well suited to probabilistic exploration-like planning: decomposition techniques generate local navigation policies that are then embedded in a factored MDP. We discuss results obtained on probabilistic exploration-like planning problems.
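As context for the abstract, the classical enumerated-MDP baseline it contrasts with can be sketched as plain value iteration over an explicit state table. The states, actions, transitions, and rewards below are illustrative assumptions, not taken from the paper:

```python
# Minimal value iteration over an enumerated (flat) MDP -- the classical
# baseline that factored and decomposed representations aim to improve on.
# The toy exploration-like model below is a hypothetical illustration.

def value_iteration(states, actions, P, R, gamma=0.95, eps=1e-6):
    """P[s][a] = list of (next_state, prob); R[s][a] = immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Hypothetical two-state model: a rover either explores or sits at base.
states = ["explore", "base"]
actions = ["move", "stay"]
P = {
    "explore": {"move": [("base", 1.0)], "stay": [("explore", 1.0)]},
    "base": {"move": [("explore", 1.0)], "stay": [("base", 1.0)]},
}
R = {
    "explore": {"move": 0.0, "stay": 1.0},  # reward only while exploring
    "base": {"move": 0.0, "stay": 0.0},
}
V = value_iteration(states, actions, P, R)
```

Because every state must be enumerated explicitly, this table-based formulation scales poorly with the number of state variables, which is the motivation for the factored and decomposed representations the abstract discusses.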
