Abstract

Web applications are exposed to frequent changes in both requirements and the underlying technologies. At the same time, the demand for quality and trust is continuously growing, and such fast evolution combined with strict quality constraints calls for mechanisms and techniques for automated testing. Automated testing of web applications often relies on random crawlers to navigate the application under test and automatically explore its structure. However, owing to the specific challenges posed by modern web systems, automatic crawlers may leave large portions of the application unexplored. In this paper, we propose the use of structural metrics to predict whether an automatic crawler with given crawling capabilities will be sufficient to achieve high coverage of the application under test. We define a taxonomy of such capabilities and determine which combination of them is expected to yield the highest reward in terms of coverage increase. Our proposal is supported by an experiment in which 19 web applications were analyzed.
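To make the idea of structural metrics concrete, the following is a minimal sketch, not the authors' actual tool or metric set: it counts a few client-side HTML constructs (forms, inline event handlers, fragment links, iframes) that typically challenge automatic crawlers, and which could serve as illustrative features for a coverage-prediction model. All metric names here are hypothetical.

    # Minimal illustrative sketch (assumed metrics, not the paper's method):
    # count HTML constructs that commonly hinder automatic crawlers.
    from bs4 import BeautifulSoup

    def structural_metrics(html: str) -> dict:
        """Count client-side constructs that typically challenge crawlers."""
        soup = BeautifulSoup(html, "html.parser")
        return {
            # forms require input generation to traverse
            "forms": len(soup.find_all("form")),
            # inline handlers signal event-driven state transitions
            "js_handlers": len(soup.find_all(attrs={"onclick": True})),
            # fragment-only links often indicate client-side routing
            "hash_links": sum(1 for a in soup.find_all("a", href=True)
                              if a["href"].startswith("#")),
            # nested browsing contexts complicate exploration
            "iframes": len(soup.find_all("iframe")),
        }

    if __name__ == "__main__":
        sample = '<form></form><a href="#home">Home</a><div onclick="go()">Go</div>'
        print(structural_metrics(sample))  # {'forms': 1, 'js_handlers': 1, 'hash_links': 1, 'iframes': 0}

In a setup like the one the abstract describes, such per-page counts would be aggregated over the application and fed to a predictor that estimates whether a crawler with given capabilities can reach high coverage.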
