Abstract
Debugging is notoriously difficult and time-consuming, yet essential for ensuring the reliability and quality of a software system. To reduce debugging effort and enable automated failure detection, we propose an automated testing framework for detecting failures in cognitive agent programs. Our approach is based on the assumption that modules within such programs are a natural unit for testing. We identify a minimal set of temporal operators that enable the specification of test conditions, and we show that the resulting test language is sufficiently expressive to detect all failure types of an existing failure taxonomy. We also introduce an approach for specifying test templates that supports a programmer in writing tests. Furthermore, an empirical analysis of agent programs allows us to evaluate whether our approach, using test templates, adequately detects failures, and to determine the effort required to do so in both single-agent and multi-agent systems. We also discuss a concrete implementation of the proposed framework for the GOAL agent programming language, developed for the Eclipse IDE. Using this framework, we performed an evaluation based on test files and accompanying questionnaires submitted by 94 novice programmers.
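To give a concrete flavour of such temporal test conditions, the sketch below evaluates conditions over a finite execution trace of agent mental states in Python. It is an illustrative analogue, not the GOAL test language itself: the operator names (`holds_always`, `holds_never`, `holds_eventually`, `holds_until`), the trace representation, and the example facts are all hypothetical, and the paper's actual minimal operator set may differ from the ones shown here.

```python
# Illustrative sketch (NOT the GOAL test language): evaluating temporal
# test conditions over a finite execution trace of agent mental states.
from typing import Callable, List, Set

State = Set[str]          # a mental state, abstracted as a set of facts
Trace = List[State]       # a finite execution trace of the agent
Condition = Callable[[State], bool]

def holds_always(trace: Trace, cond: Condition) -> bool:
    """The condition must hold in every state of the trace."""
    return all(cond(s) for s in trace)

def holds_never(trace: Trace, cond: Condition) -> bool:
    """The condition must hold in no state of the trace."""
    return not any(cond(s) for s in trace)

def holds_eventually(trace: Trace, cond: Condition) -> bool:
    """The condition must hold in at least one state of the trace."""
    return any(cond(s) for s in trace)

def holds_until(trace: Trace, cond: Condition, release: Condition) -> bool:
    """cond must hold in every state before the first state where
    release holds; release must hold somewhere in the trace."""
    for s in trace:
        if release(s):
            return True
        if not cond(s):
            return False
    return False  # release never held

# Usage: a toy trace in which the agent adopts and then achieves a goal.
trace = [{"goal(clean)"}, {"goal(clean)", "bel(dirty)"}, {"bel(clean)"}]
assert holds_eventually(trace, lambda s: "bel(clean)" in s)
assert holds_never(trace, lambda s: "bel(error)" in s)
assert holds_until(trace, lambda s: "goal(clean)" in s,
                   lambda s: "bel(clean)" in s)
```

A failing assertion here plays the role of a detected failure: a test condition that the observed execution trace of a module does not satisfy.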