Abstract

Knowledge-based engineering and computational intelligence are expected to become core technologies in the design and manufacturing of the next generation of space exploration missions. The literature is contradictory on how such systems should be assessed, and studies indicate significant disagreement regarding the amount of testing needed. Standard black-box test suites are impractically large, since the black-box approach neglects the internal structure of knowledge-based systems. In contrast, practical results repeatedly indicate that only a few tests are needed to sample the range of behaviors of a knowledge-based program. In this paper, we model testing as a search process over the internal state space of the knowledge-based system. When comparing test suites, the suite that examines a larger portion of the state space is considered more complete. Our goal is to investigate the trade-off between this completeness criterion and test suite size. The results of testing experiments on tens of thousands of mutants of real-world knowledge-based systems indicate that only a very limited gain in completeness can be achieved through prolonged testing. Simple (or random) search strategies for testing appear to be as powerful as testing with more thorough search algorithms.
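To make the completeness criterion concrete, the following is a minimal sketch, not the paper's actual experimental setup: it models a knowledge-based system as a toy transition graph over internal states, treats each test as one run through that graph, and measures a suite's completeness as the fraction of the state space its runs examine. The names make_random_system, run_test, and suite_completeness, and all parameter values, are illustrative assumptions.

import random

def make_random_system(n_states=200, branching=3, seed=0):
    # Hypothetical stand-in for a knowledge-based system: each internal
    # state (e.g., a rule activation) has a few successor states.
    rng = random.Random(seed)
    return {s: [rng.randrange(n_states) for _ in range(branching)]
            for s in range(n_states)}

def run_test(system, rng, max_steps=20):
    # One "test": start at the initial state and follow transitions,
    # returning the set of internal states the run examined.
    state, visited = 0, {0}
    for _ in range(max_steps):
        state = rng.choice(system[state])
        visited.add(state)
    return visited

def suite_completeness(system, n_tests, seed=1):
    # Completeness of a suite: the portion of the state space covered
    # by the union of all its test runs.
    rng = random.Random(seed)
    covered = set()
    for _ in range(n_tests):
        covered |= run_test(system, rng)
    return len(covered) / len(system)

if __name__ == "__main__":
    system = make_random_system()
    for size in (1, 5, 25, 125, 625):
        print(f"{size:4d} tests -> completeness {suite_completeness(system, size):.2f}")

Running this sketch shows coverage climbing quickly for small suites and then flattening, which is the qualitative shape of the trade-off the abstract describes: past a modest suite size, prolonged testing buys little additional completeness.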
