Abstract

Debugging is notoriously difficult and time-consuming, yet essential for ensuring the reliability and quality of a software system. To reduce debugging effort and enable automated failure detection, we propose an automated testing framework for detecting failures in cognitive agent programs. Our approach is based on the assumption that modules within such programs are a natural unit for testing. We identify a minimal set of temporal operators that enables the specification of test conditions, and we show that the resulting test language is sufficiently expressive to detect all failure types in an existing failure taxonomy. We also introduce an approach for specifying test templates that supports a programmer in writing tests. Furthermore, an empirical analysis of agent programs allows us to evaluate whether our template-based approach adequately detects failures, and to determine the effort required to do so in both single- and multi-agent systems. We also discuss a concrete implementation of the proposed framework for the GOAL agent programming language, developed for the Eclipse IDE. Using this framework, we performed evaluations based on test files and accompanying questionnaires submitted by 94 novice programmers.
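
To make the flavour of such test conditions concrete, the sketch below shows what a module-level test with temporal operators could look like. This is a minimal illustration only: the file layout, the operator spellings (always, never, eventually), and the module and predicate names are assumptions in a GOAL-like notation, not the framework's exact syntax.

    % Hypothetical GOAL-style test sketch (illustrative syntax, not exact).
    test patrolTest {
        do patrol.                              % execute the module under test
        always bel(battery(Level), Level > 0).  % invariant: battery never depleted
        never bel(collision(_)).                % safety: no collision is ever believed
        eventually goal(visited(room1)).        % progress: the goal is adopted at some point
    }

Here the temporal operators constrain the agent's mental state over the execution of a single module, which reflects the paper's premise that modules are a natural unit for testing.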
