Abstract

Agent planning programs are finite-state programs, possibly containing loops, whose atomic instructions consist of a guard, a maintenance goal, and an achievement goal, which act as precondition-invariant-postcondition assertions in program specification. Executing such a program requires generating plans that meet the goals specified in the atomic instructions while respecting the program's control flow. Recently, De Giacomo et al. (2016) presented a technique, based on iteratively solving classical planning problems with action costs, for realizing planning programs in deterministic domains. That technique generally works well for domains with few or no dead-end states. In this paper, we propose an enhancement of the technique to handle deterministic domains that may have many dead-end states, and we study its effectiveness through an experimental analysis.
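To make the structure concrete, the following is a minimal sketch (not from the paper; all names and the example goals are illustrative) of a planning program as a finite-state transition system whose edges carry guard/maintenance/achievement triples:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Instruction:
    """One atomic instruction, read as a precondition / invariant /
    postcondition triple over the planning domain."""
    guard: str      # precondition: must hold for the transition to be taken
    maintain: str   # maintenance goal: must hold throughout plan execution
    achieve: str    # achievement goal: must hold when the plan completes

@dataclass
class PlanningProgram:
    """A finite-state program: program states plus guarded transitions.
    Loops are allowed, since a state may be revisited."""
    states: set
    initial: str
    transitions: dict = field(default_factory=dict)  # (state, Instruction) -> state

    def add(self, src: str, instr: Instruction, dst: str) -> None:
        self.transitions[(src, instr)] = dst

# Illustrative two-state program with a loop: shuttle between a depot
# and a work site while never running out of fuel.
prog = PlanningProgram(states={"s0", "s1"}, initial="s0")
prog.add("s0", Instruction("at_depot", "fuel_ok", "at_site"), "s1")
prog.add("s1", Instruction("at_site", "fuel_ok", "at_depot"), "s0")
```

Realizing such a program means synthesizing, for each transition, a plan whose execution satisfies the achievement goal while preserving the maintenance goal, and doing so for every transition reachable from the initial state.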
