Abstract

Validating the performance of a knowledge-based system is a critical step in its commercialization. Buyers of systems intended for serious use invariably require guarantees about system performance, and this is particularly true for diagnostic systems. Yet many problems arise in the validation process, especially for large knowledge-based systems. One of the biggest challenges facing the developer is knowing how much testing is sufficient to show that the system is valid. Exhaustive testing is almost always impractical because of the enormous number of possible test cases, many of which are not useful. It would thus be highly desirable to have a means of defining a representative set of test cases that, if executed correctly by the system, would provide high confidence in the system's validity. This paper describes the development team's experience in validating the performance of a large commercial diagnostic knowledge-based system, covering both the validation procedure employed and the heuristic technique used to generate the representative set of test cases.
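To make the motivation concrete, the sketch below illustrates one generic heuristic for reducing an exhaustive test space to a representative set: "each-choice" coverage, where every value of every input attribute appears in at least one test case. The attribute names and the heuristic itself are illustrative assumptions for this example; the abstract does not disclose the paper's actual technique.

```python
from itertools import product

# Hypothetical symptom attributes for a diagnostic system; the real
# system's inputs and heuristic are not given in the abstract.
attributes = {
    "temperature": ["low", "normal", "high"],
    "pressure": ["low", "normal", "high"],
    "vibration": ["absent", "present"],
}

# Exhaustive test space: every combination of attribute values.
exhaustive = list(product(*attributes.values()))

def representative_cases(attrs):
    """Simple 'each-choice' heuristic: keep only enough cases that
    every value of every attribute appears at least once -- far
    cheaper than exhaustive testing, at the cost of weaker coverage."""
    keys = list(attrs)
    width = max(len(values) for values in attrs.values())
    cases = []
    for i in range(width):
        # Cycle through each attribute's values so all are covered.
        cases.append(tuple(attrs[k][i % len(attrs[k])] for k in keys))
    return cases

reduced = representative_cases(attributes)
print(len(exhaustive))  # 18 combinations in the full space
print(len(reduced))     # 3 cases cover every individual value
```

Even this weak heuristic cuts 18 exhaustive combinations down to 3 cases; stronger criteria (e.g., pairwise coverage) trade a larger suite for higher confidence, which is the balance the validation effort must strike.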
