Abstract

An artificial intelligence system designed for operation in a real-world environment faces a nearly infinite set of possible performance scenarios. Designers and developers thus face the challenge of validating proper performance across both foreseen and unforeseen conditions, particularly when the artificial intelligence controls a robot that will operate in close proximity to humans or that may otherwise endanger them. While the manual creation of test cases allows limited testing (perhaps ensuring that a set of foreseeable conditions triggers an appropriate response), this may be insufficient to fully characterize and validate safe system performance. An approach to validating the performance of an artificial intelligence system using a simple artificial intelligence test case producer (AITCP) is presented. The AITCP allows the creation and simulation of prospective operating scenarios at a rate far exceeding that possible by human testers. Four scenarios for testing an autonomous navigation control system are presented: a single actor in two-dimensional space, multiple actors in two-dimensional space, a single actor in three-dimensional space, and multiple actors in three-dimensional space. The utility of the AITCP is compared to that of human testers in each of these scenarios.
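The core mechanism described here is straightforward to sketch: generate candidate scenarios at random, simulate the navigation system against each one, and flag safety violations. The following Python sketch illustrates such a producer loop under stated assumptions; the names (generate_scenario, run_scenario, aitcp) and the minimum-separation collision check are illustrative inventions, not the paper's implementation.

    import random
    import math

    def generate_scenario(num_actors, dims, bounds=100.0):
        """Produce one random test scenario: start/goal positions for the
        robot and a short random waypoint path for each moving actor."""
        def point():
            return [random.uniform(0.0, bounds) for _ in range(dims)]
        return {
            "robot_start": point(),
            "robot_goal": point(),
            "actors": [[point() for _ in range(5)] for _ in range(num_actors)],
        }

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def run_scenario(navigate, scenario, min_separation=1.0, steps=200):
        """Simulate the navigation system under test against one scenario,
        flagging any step where the robot comes too close to an actor."""
        pos = list(scenario["robot_start"])
        for step in range(steps):
            pos = navigate(pos, scenario)            # system under test
            for path in scenario["actors"]:
                actor_pos = path[step % len(path)]   # crude actor motion model
                if distance(pos, actor_pos) < min_separation:
                    return ("FAIL", step)            # safety violation found
        return ("PASS", steps)

    def aitcp(navigate, num_actors, dims, trials=10_000):
        """Producer loop: generate and evaluate scenarios far faster than a
        human tester could author them, logging failures for review."""
        failures = []
        for seed in range(trials):
            random.seed(seed)                        # reproducible test cases
            scenario = generate_scenario(num_actors, dims)
            verdict, step = run_scenario(navigate, scenario)
            if verdict == "FAIL":
                failures.append((seed, step))
        return failures

Under this sketch, the four scenario families correspond to the (num_actors, dims) pairs (1, 2), (k, 2), (1, 3), and (k, 3), and seeding each trial makes any failing test case reproducible for human inspection.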

Highlights

  • Validation of the safe performance of an artificial intelligence system (AIS), which operates in close proximity to humans or which could prospectively injure humans through its failure or maloperation, is an integral part of the system testing process

  • The work presented demonstrates that an artificial intelligence test case producer can be utilized to effectively test artificial intelligence systems for both surface and airborne robots

  • The testing demonstrated the ability of the AI test routine (AITR) to identify bugs in an AI system under test (AISUT)



Introduction

Validation of the safe performance of an artificial intelligence system (AIS), which operates in close proximity to humans or which could prospectively injure humans through its failure or maloperation, is an integral part of the system testing process. This requires demonstrating (1) that the system will work as desired in a real-world environment and (2) that the system will perform safely and effectively under the multitude of prospective operating conditions that it may encounter. These can be effectively validated via the creation and implementation of numerous test cases. Feigenbaum [4], for example, reviewed artificial intelligence systems designed for diagnosis based on medical case studies and concluded that the modularity of the “Situation → Action” technique allowed rules to be changed or added as the expert’s knowledge of the domain grew. This allowed more advanced cases to be used for validation.
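To illustrate why this modularity aids validation, a “Situation → Action” rule base can be grown one rule at a time without rewriting the dispatcher. The sketch below is a generic schematic with invented predicates and actions; it is not drawn from the systems Feigenbaum reviewed.

    # Each rule pairs a situation predicate with an action, so the rule base
    # can grow incrementally as expert knowledge of the domain grows.
    RULES = []

    def rule(predicate):
        """Register a situation -> action rule without touching existing rules."""
        def register(action):
            RULES.append((predicate, action))
            return action
        return register

    @rule(lambda s: s["obstacle_distance"] < 2.0)
    def emergency_stop(state):
        return "stop"

    @rule(lambda s: s["obstacle_distance"] < 10.0)
    def slow_down(state):
        return "reduce_speed"

    def decide(state):
        """Fire the first rule whose situation matches the current state."""
        for predicate, action in RULES:
            if predicate(state):
                return action(state)
        return "continue"

    print(decide({"obstacle_distance": 5.0}))  # -> "reduce_speed"

Adding a new, more advanced case then amounts to appending another rule, leaving previously validated rules untouched.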
