Applying symbolic planning requires the specification of a symbolic action model, which is usually written by hand by a domain expert. However, such an encoding may be faulty, due either to human error or to a lack of domain knowledge. Learning the symbolic action model automatically has therefore been widely adopted as an alternative to manual specification. In this paper, we focus on the problem of learning action models offline, from an input set of partially observable plan traces. In particular, we propose an approach that (i) augments the observability of a given plan trace by applying predefined logical rules, and (ii) learns the preconditions and effects of each action in a plan trace from partial observations before and after the action's execution. We formally prove that our approach learns action models with fundamental theoretical properties that other methods do not provide. We experimentally show that our approach outperforms a state-of-the-art method on a large set of existing benchmark domains. Furthermore, we compare the effectiveness of the learned action models for solving planning problems and show that the action models learned by our approach are substantially more effective than those learned by a state-of-the-art method.
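As a rough illustration of step (i), the sketch below propagates partial observations forward along a trace. It is a minimal sketch under an assumed rule (forward persistence: a fluent whose value is known at step t stays known at step t+1 when the intervening action is known not to affect it); the `Step`, `propagate_forward`, and `unaffected` names are hypothetical and not taken from the paper, whose actual predefined rules may differ.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One step of a plan trace: a (possibly partial) observation and an action."""
    observed: dict[str, bool]      # fluent -> truth value, only for observed fluents
    action: str | None = None      # action executed in this state (None for last state)

def propagate_forward(trace: list[Step], unaffected: dict[str, set[str]]) -> None:
    """Augment observability by forward persistence: a fluent whose value is
    known at step t stays known at t+1 when the action at t is known not to
    affect it. `unaffected[a]` is the set of fluents action `a` cannot change.
    (An assumed illustrative rule, not necessarily one of the paper's rules.)"""
    for t in range(len(trace) - 1):
        action = trace[t].action
        if action is None:
            continue
        for fluent, value in trace[t].observed.items():
            if fluent in unaffected.get(action, set()) and fluent not in trace[t + 1].observed:
                trace[t + 1].observed[fluent] = value

# Example: (clear b) is observed before (pick-up a), which cannot affect it,
# so the observation is propagated into the otherwise empty next state.
trace = [Step({"clear b": True, "ontable a": True}, action="pick-up a"), Step({})]
propagate_forward(trace, {"pick-up a": {"clear b"}})
print(trace[1].observed)   # {'clear b': True}
```

With richer observations, such rules can also be applied backward (e.g., a fluent known after an action that cannot have produced it must already have held before), which is how augmentation can supply the pre- and post-action observations that step (ii) learns from.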