Automated planning in AI and the logics of knowing how are closely connected. In the recent literature, various planning-based know-how logics have been proposed and studied, making use of several notions of planning from AI. In this paper, we explore the reverse direction by using a multi-agent logic of knowing how to do know-how-based planning via model checking and theorem proving/satisfiability checking. Based on our logical framework, we propose two new classes of related planning problems: higher-order epistemic planning and meta-level epistemic planning, which generalize the current genre of epistemic planning in the literature. The former is planning about planning, i.e., planning with higher-order goals that are themselves about epistemic planning, e.g., finding a plan for an agent to make sure p such that the adversary does not know how to make p false in the future. The latter concerns planning at the meta-level by abstract reasoning that combines knowledge-how from different agents, e.g., given that i knows how to prove a lemma and i knows that j knows how to prove the theorem once the lemma is proved, we should be able to derive that i knows how to let j know how to prove the theorem. To make these possible, our framework features not only the operators of know-that and know-how but also a temporal operator □, which helps capture both local and global knowledge-how. We axiomatize this powerful logic over finite models with perfect recall and show its decidability. We also give a PTIME algorithm for the model checking problem over finite models.
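As a rough illustration of the two example goals mentioned above, the following sketch uses assumed notation (Kh_i for "agent i knows how to achieve", K_i for "agent i knows that", and □ for the temporal operator); the paper's exact syntax may differ.

```latex
% Hypothetical rendering of the two example goals from the abstract.
% Kh_i = "i knows how to achieve", K_i = "i knows that", \Box = temporal operator.

% Higher-order epistemic planning goal: agent i knows how to make sure p
% while the adversary j never knows how to make p false.
\mathit{Kh}_i\bigl(p \land \Box\,\neg \mathit{Kh}_j\,\neg p\bigr)

% Meta-level reasoning: from "i knows how to prove the lemma" and
% "i knows that once the lemma is proved, j knows how to prove the theorem",
% derive "i knows how to let j know how to prove the theorem".
\mathit{Kh}_i\,\mathit{lemma}
  \;\land\;
K_i\bigl(\mathit{lemma} \to \mathit{Kh}_j\,\mathit{theorem}\bigr)
  \;\Rightarrow\;
\mathit{Kh}_i\,\mathit{Kh}_j\,\mathit{theorem}
```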