Abstract

An experimental comparison of the effectiveness of the all-uses and all-edges test data adequacy criteria was performed. A large number of test sets were randomly generated for each of nine subject programs with subtle errors. For each test set, the percentages of (executable) edges and definition-use associations covered were measured, and it was determined whether the test set exposed an error. Hypothesis testing was used to investigate whether all-uses adequate test sets are more likely to expose errors than are all-edges adequate test sets. All-uses was shown to be significantly more effective than all-edges for five of the subjects; moreover, for four of these, all-uses appeared to guarantee detection of the error. Further analysis showed that in four subjects, all-uses adequate test sets appeared to be more effective than all-edges adequate test sets of the same size. Logistic regression showed that in some, but not all, of the subjects there was a strong positive correlation between the percentage of definition-use associations covered by a test set and its error-exposing ability.
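To make the two coverage measures concrete, the sketch below illustrates, under assumed data, how the percentage of executable edges and definition-use associations covered by a test set might be computed, and how adequacy (100% coverage) would be checked. The representation of edges and def-use associations, and all values, are hypothetical illustrations rather than the paper's actual instrumentation.

```python
# Hypothetical sketch (not the paper's tooling): given per-test coverage
# records, compute the coverage percentages the study measures and check
# whether the test set is all-edges / all-uses adequate (i.e., 100% covered).

def coverage_fraction(covered_by_tests, required):
    """Fraction of required items (edges or def-use associations) covered
    by at least one test in the set."""
    covered = set().union(*covered_by_tests) if covered_by_tests else set()
    return len(covered & required) / len(required)

# Illustrative data: edges as (node, node) pairs; def-use associations as
# (variable, def_line, use_line) triples. Names and values are made up.
executable_edges = {(1, 2), (2, 3), (2, 4), (3, 5), (4, 5)}
du_associations  = {("x", 1, 3), ("x", 1, 4), ("y", 2, 5)}

test_set = [
    {"edges": {(1, 2), (2, 3), (3, 5)}, "dus": {("x", 1, 3), ("y", 2, 5)}},
    {"edges": {(1, 2), (2, 4), (4, 5)}, "dus": {("x", 1, 4), ("y", 2, 5)}},
]

edge_cov = coverage_fraction([t["edges"] for t in test_set], executable_edges)
du_cov   = coverage_fraction([t["dus"]   for t in test_set], du_associations)

print(f"edge coverage:    {edge_cov:.0%}")  # all-edges adequate iff 100%
print(f"def-use coverage: {du_cov:.0%}")    # all-uses adequate iff 100%
```

In the experiment described above, such per-test-set percentages, together with a flag recording whether the set exposed the program's error, would form the observations fed into the hypothesis tests and logistic regression.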
