Abstract

Test oracles differentiate between correct and incorrect system behavior. Automating test oracles for visual output systems mainly involves image comparison, where a snapshot of the output is compared against a reference image. The captured snapshot can be subject to variations such as scaling and shifting, which lead to incorrect evaluations. Existing approaches employ computer vision techniques to address a specific set of variations. In this article, we introduce ADVISOR, an adjustable framework for test oracle automation of visual output systems. It allows a flexible combination and configuration of computer vision techniques. We evaluated a set of valid configurations on a benchmark dataset collected during tests of commercial digital TV systems. Some of these configurations achieved up to 3% better overall accuracy than state-of-the-art tools. Further, we observed that no single configuration achieves the best accuracy for all types of image variations. We also empirically investigated the impact of significant parameters. One of them is a threshold on the image matching score that determines the final verdict; this parameter is tuned automatically by offline training. We also evaluated runtime performance. The results showed that the differences between the ADVISOR configurations and state-of-the-art tools are on the order of seconds per image comparison.
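
To make the threshold-based verdict concrete, the sketch below is a minimal illustration (not the ADVISOR implementation) of an image-comparison oracle, assuming OpenCV. The function name `oracle_verdict` and the default threshold value are hypothetical; ADVISOR tunes its threshold by offline training and combines further computer vision techniques to handle variations such as scaling.

```python
# Minimal sketch of a threshold-based image-comparison oracle (assumption:
# OpenCV is available; this is not the ADVISOR pipeline itself).
import cv2


def oracle_verdict(snapshot_path: str, reference_path: str,
                   threshold: float = 0.8) -> bool:
    """Return True (pass) if the reference image is found in the captured
    snapshot with a matching score at or above the threshold."""
    snapshot = cv2.imread(snapshot_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

    # Normalized cross-correlation tolerates shifts of the reference within
    # the snapshot; scaling variations would require extra handling, e.g. a
    # multi-scale search or feature-based matching.
    result = cv2.matchTemplate(snapshot, reference, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, _ = cv2.minMaxLoc(result)

    # The threshold on the matching score determines the final verdict.
    return max_score >= threshold
```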
