ABSTRACT
Autonomous systems such as space‐ or underwater‐exploration robots or robots assisting elderly people often include an artificial intelligence (AI) planner as a component. Starting from the initial state of a system, an AI planner automatically generates sequential plans to reach final states that satisfy user‐specified goals. Plans with a minimum number of intermediate steps, or plans that take the least time to execute, are usually strongly desired, as such plans incur minimal cost. Unfortunately, testing whether an AI planner generates optimal plans is almost impossible because the expected cost of these plans is usually unknown. Based on mutation adequacy test suite selection, this article proposes a novel metamorphic testing framework for detecting the lack of optimality in AI planners. The general idea is to perform a systematic but non‐exhaustive state space exploration from the initial state and to select mutant‐adequate states to instantiate new planning tasks as follow‐up test cases. We then check a metamorphic relation between the planner's automatically generated solutions for these new test cases and the cost of the initial plan. We implemented this metamorphic testing framework in a tool called MorphinPlan. Our experimental evaluation shows that MorphinPlan can detect non‐optimal behaviour in both mutated AI planners and off‐the‐shelf, configurable planners. It also shows that, in terms of mutation scores, our proposed mutation adequacy test selection strategy outperforms three alternative test generation and selection strategies, including random state selection and random walks through the state space.
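To make the metamorphic relation described in the abstract concrete, the following Python sketch shows one plausible way such a check could be structured. It is an illustration under stated assumptions, not the authors' MorphinPlan implementation: the names `planner.solve`, `plan.cost`, `task.with_initial_state`, and the `explored_states` list of (state, cost-to-reach) pairs are hypothetical placeholders standing in for whatever exploration and planning interfaces are actually used. The underlying idea is that if an intermediate state can be reached at some cost and a follow-up plan from that state to the goal is cheap enough that the combined cost undercuts the original plan's cost, the original plan cannot have been optimal.

```python
# Illustrative sketch only; all interfaces below are assumed, not taken
# from MorphinPlan or any specific planning library.

def check_optimality_mr(planner, initial_task, explored_states):
    """Report metamorphic-relation violations indicating non-optimality.

    `explored_states` is assumed to be an iterable of (state, cost_to_reach)
    pairs gathered by a systematic but non-exhaustive exploration starting
    from the initial state of `initial_task`.
    """
    original_plan = planner.solve(initial_task)          # source test case
    original_cost = original_plan.cost

    violations = []
    for state, cost_to_reach in explored_states:
        # Follow-up test case: same goal, but starting from the explored state.
        follow_up_task = initial_task.with_initial_state(state)
        follow_up_plan = planner.solve(follow_up_task)
        if follow_up_plan is None:
            continue  # goal unreachable from this state; nothing to compare

        # Metamorphic relation: the original plan should cost no more than
        # reaching `state` and then executing the follow-up plan. If it does,
        # a cheaper plan exists, so the original plan was not optimal.
        if cost_to_reach + follow_up_plan.cost < original_cost:
            violations.append((state, cost_to_reach + follow_up_plan.cost))
    return violations
```

In this reading, the test selection question addressed in the article is which explored states to keep as follow-up test cases; the sketch simply checks the relation for whatever states are supplied.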