Abstract

Context: Many organizations see software test automation as a way to decrease testing costs and to reduce cycle time in software development. However, establishing automated testing may fail if test automation is not applied at the right time, in the right context, and with an appropriate approach.

Objective: Decisions on when and what to automate are important, since wrong decisions can lead to disappointment and to wasted resources and effort. To support decision making on when and what to automate, researchers and practitioners have proposed various guidelines, heuristics, and factors since the early days of test automation technologies. As the number of such sources has increased, it is important to systematically categorize the current state of the art and practice and to provide a synthesized overview.

Method: To achieve the above objective, we performed a Multivocal Literature Review (MLR) on when and what to automate in software testing. An MLR is a form of Systematic Literature Review (SLR) that includes the grey literature (e.g., blog posts and white papers) in addition to the formally published literature (e.g., journal and conference papers). We searched the academic literature using Google Scholar and the grey literature using the regular Google search engine.

Results: Our MLR and its results are based on 78 sources, of which 52 were grey literature and 26 were formally published. We used qualitative analysis (coding) to classify the factors affecting the when- and what-to-automate questions into five groups: (1) Software Under Test (SUT)-related factors, (2) test-related factors, (3) test-tool-related factors, (4) human and organizational factors, and (5) cross-cutting and other factors. The most frequent individual factors were: the need for regression testing (44 sources), economic factors (43), and maturity of the SUT (39).

Conclusion: We show that current decision support in software test automation provides reasonable advice for industry, and as a practical outcome of this research we summarize it as a checklist that practitioners can use. However, we recommend developing systematic, empirically validated decision-support approaches, as the existing advice is often unsystematic and based on weak empirical evidence.
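To make the idea of a factor-based checklist more concrete, the sketch below shows one possible way a practitioner could operationalize such a decision aid as a weighted score over candidate factors. The factor names, weights, and threshold are illustrative assumptions for demonstration only; they are not the checklist published in the paper.

```python
# Illustrative sketch only: factor names, weights, and the decision
# threshold below are hypothetical, not taken from the MLR's checklist.
from dataclasses import dataclass


@dataclass
class AutomationFactors:
    """A candidate test (or test suite), with each factor scored 0..1."""
    regression_need: float    # how often the test is expected to be re-run
    economic_benefit: float   # expected savings relative to automation cost
    sut_maturity: float       # stability of the software under test
    tool_support: float       # availability of a suitable test tool
    team_skills: float        # automation skills available in the team


def automation_score(f: AutomationFactors) -> float:
    """Weighted sum of the factors; the weights are assumptions."""
    weights = {
        "regression_need": 0.30,
        "economic_benefit": 0.30,
        "sut_maturity": 0.20,
        "tool_support": 0.10,
        "team_skills": 0.10,
    }
    return sum(w * getattr(f, name) for name, w in weights.items())


if __name__ == "__main__":
    candidate = AutomationFactors(
        regression_need=0.9, economic_benefit=0.7,
        sut_maturity=0.8, tool_support=0.6, team_skills=0.5,
    )
    score = automation_score(candidate)
    print(f"score={score:.2f} ->", "automate" if score >= 0.6 else "keep manual")
```

In practice, the weights and the cut-off would need to be calibrated to the organization's context; the point of the sketch is only that the five factor groups identified in the study can be turned into an explicit, repeatable decision procedure.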
