Abstract

Uncertainty, inherent in most real-world domains, can cause failure of apparently sound classical plans. On the other hand, reasoning with representations that explicitly reflect uncertainty can engender significant, even prohibitive, additional computational costs. This paper contributes a novel approach to planning in uncertain domains. The approach extends classical planning: machine learning adjusts planner bias in response to execution failures, conditioning the classical planner toward producing plans that tend to work when executed in the world. The planner's representations remain simple and crisp; uncertainty is represented and reasoned about only during learning. The user-supplied domain theory is left intact, so the operator definitions and the planner's projection ability remain as the domain expert intended them. Some structuring of the planner's bias space is required, but with suitable structuring the approach scales well: learning converges using no more than a polynomial number of examples. The system then probabilistically guarantees either that the plans produced will achieve their goal when executed or that adequate planning is not possible with the domain theory provided. An implemented robotic system is described.
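The learn-from-failure loop the abstract describes can be sketched as follows. This is a minimal illustrative toy, not the authors' system: the operator names, the representation of planner bias as preference weights, and the halving update rule are all assumptions introduced here for illustration. The planner itself stays crisp and deterministic; uncertainty enters only through observed execution outcomes, which the learner uses to demote operators that fail in the world.

```python
import random

random.seed(0)  # for reproducibility of this toy run

# Hypothetical crisp operators the classical planner can choose among.
OPERATORS = ["grasp-top", "grasp-side", "push"]

# Hidden success probabilities of the (uncertain) world; the planner never
# sees these directly -- it only observes individual execution outcomes.
SUCCESS_PROB = {"grasp-top": 0.2, "grasp-side": 0.95, "push": 0.6}

# The planner's bias: preference weights over operators, adjusted by learning.
bias = {op: 1.0 for op in OPERATORS}

def plan(bias):
    """Classical-planner stand-in: deterministically pick the most-preferred
    operator. The plan itself involves no uncertainty reasoning."""
    return max(OPERATORS, key=lambda op: bias[op])

def execute(op):
    """World stand-in: the operator succeeds with its hidden probability."""
    return random.random() < SUCCESS_PROB[op]

def train(episodes=200):
    """Plan, execute, and on failure demote the failing operator's weight,
    conditioning the planner toward plans that tend to work."""
    for _ in range(episodes):
        op = plan(bias)
        if not execute(op):
            bias[op] *= 0.5  # illustrative update; the paper's rule differs
    return plan(bias)

final_choice = train()
```

After training, the planner's bias has shifted away from operators that failed during execution, even though the planner's own representations never mention probability. The polynomial sample-complexity and probabilistic-guarantee results of the paper concern a structured bias space, which this flat weight table does not attempt to model.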
