Given a sequence of observations of a plan execution, plan recognition and goal recognition are often treated as interchangeable tasks in AI planning. Strictly speaking, however, the former tries to identify a plan, and the latter a set of goals, that explains the observations. Both recognition tasks are data-driven, where the data comprise the plan observations, and they are especially useful in proactive systems. Depending on the source of knowledge about the agents under observation, these tasks are traditionally solved by two different approaches, which require either a large plan library or a planning model. In between these two approaches, we propose a novel unified constraint-based approach that distinguishes between the two tasks but is valid for both. We present a formulation, based on Partial Order Causal Link (POCL) planning, that is compiled from a small plan library to approximate a model that captures the essential causality of the original planning model. We deal with unreliable observations, which include missing and noisy observations of the real world; modeling such observations in our formulation is straightforward. The learned model allows us to address a data-driven optimization task to find the plans that best satisfy the observations (plan recognition) and the goals that are sufficiently supported by the causal relationships of the observations (goal recognition). We perform a complete evaluation of our approach on IPC domains under several indicators (accuracy, spread, and ROC curves) with varying degrees of partial observability and noise in the observations, and we also compare it with other model-based approaches from the literature.