A Systematic Literature Review on Graphical User Interface Testing Through Software Patterns
Context: Graphical user interface (GUI) testing of mobile applications (apps) is significant from a user perspective to ensure that apps are visually appealing and user-friendly. Pattern-based GUI testing (PBGT) is a model-based testing (MBT) approach designed to enhance user satisfaction and reusability while minimizing the effort required to model and test the UIs of mobile apps. Several primary studies have been conducted in the PBGT domain. Problem: The current state of the art lacks comprehensive secondary studies on PBGT. To our knowledge, the area has received little in-depth research attention, and numerous challenges and limitations persist in the existing literature. Objective: This study aims to fill these gaps in the body of knowledge. We highlight popular research topics and analyze their relationships. We survey state-of-the-art approaches and techniques, a taxonomy of tools and modeling languages, a list of reported UI test patterns (UITPs), and a taxonomy for writing UITPs. We also highlight practical challenges, limitations, and gaps in the targeted research area, as well as future research directions. Method: We conducted a systematic literature review (SLR) on PBGT in the context of Android and web apps. A hybrid methodology combining the Kitchenham and PRISMA guidelines was adopted to achieve the targeted research objectives (ROs). We performed a keyword-based search on well-known databases and selected 30 (out of 557) studies. Results: The study identifies 11 tools used in PBGT and devises a taxonomy to categorize them. A taxonomy for writing UITPs has also been developed. In addition, we outline the limitations of the targeted research domain and future directions. Conclusion: This study helps the community and readers better understand the targeted research area.
A comprehensive knowledge of existing tools, techniques, and methodologies is helpful for practitioners. Moreover, the identified limitations, gaps, emerging trends, and future research directions will benefit researchers who intend to work further in this area.
- Research Article
25
- 10.1109/tr.2018.2869227
- Mar 1, 2019
- IEEE Transactions on Reliability
Android applications do not seem to be tested as thoroughly as desktop ones. In particular, graphical user interface (GUI) testing appears generally limited. Like web-based applications, mobile apps suffer from GUI test fragility, i.e., GUI test classes failing or needing updates due to even minor modifications in the GUI or in the application under test. The objective of our study is to estimate the adoption of GUI testing frameworks among Android open-source applications, the quantity of modifications needed to keep test classes up to date, and the portion of those modifications due to GUI test fragility. We introduce a set of 21 metrics to measure the adoption of testing tools, the evolution of test classes and test methods, and the fragility of test suites. We computed our metrics for six GUI testing frameworks, none of which achieved significant adoption among Android projects hosted on GitHub. When present, GUI test methods associated with the considered tools are modified often, and a relevant portion (70% on average) of those modifications is induced by GUI-related fragilities. On average, for the projects considered, more than 7% of the total modified lines of code between consecutive releases belong to test classes developed with the analyzed testing frameworks. This percentage was, on average, higher than that required by generic test code based on the JUnit testing framework. The fragility of GUI tests constitutes a relevant concern, and probably an obstacle to developers adopting test automation. This first evaluation of the fragility of Android scripted GUI testing can serve as a benchmark for developers and testers who use the analyzed tools, and as the basis for a taxonomy of fragility causes and guidelines to mitigate the issue.
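The study above reports ratios such as the share of test-code modifications induced by GUI-related fragility. As a minimal sketch (the paper defines 21 metrics; this function and its name are ours, not the authors'), one such ratio can be computed as:

```python
def fragility_ratio(modified_test_lines, gui_related_lines):
    """Fraction of modified test lines attributable to GUI changes.

    Illustrative only: a stand-in for the general shape of the
    fragility metrics described in the paper."""
    if modified_test_lines == 0:
        return 0.0
    return gui_related_lines / modified_test_lines

# e.g. 70 of 100 modified test lines were caused by GUI changes
print(fragility_ratio(100, 70))  # → 0.7
```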
- Research Article
8
- 10.1002/smr.1963
- Jun 19, 2018
- Journal of Software: Evolution and Process
Recently, testing mobile applications has gained much attention due to the widespread use of smartphones and the tremendous number of mobile applications being developed. It is essential to test mobile applications before they are released for public use. Graphical user interface (GUI) testing is a type of mobile application testing conducted to ensure the proper functionality of GUI components. Typically, GUI testing requires a lot of effort and time, whether manual or automatic. Cloud computing is an emerging technology that can be used in software engineering to overcome the shortcomings of traditional testing approaches by drawing on cloud computing resources. As a result, testing-as-a-service has been introduced as a service model that conducts all testing activities in a fully automated manner. In this paper, a system for mobile application GUI testing based on a testing-as-a-service architecture is proposed. The proposed system performs all testing activities, including automatic test case generation and simultaneous test execution on multiple virtual nodes, for testing Android-based applications. It reduces testing time and meets the fast time-to-market constraint of mobile applications. Moreover, the proposed architecture addresses issues such as maximizing resource utilization, continuous monitoring to ensure system reliability, and a fault-tolerance approach to handle any failure.
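The core idea of simultaneous test execution on multiple virtual nodes can be sketched with a thread pool. This is a hedged illustration only (`run_on_node` is a hypothetical stand-in; the real system would drive Android emulators in the cloud):

```python
from concurrent.futures import ThreadPoolExecutor

def run_on_node(node_id, test_case):
    # Hypothetical stand-in for executing one generated test case
    # on one virtual node; always "passes" in this sketch.
    return (node_id, test_case, "PASS")

def run_suite_in_parallel(test_cases, node_count=4):
    """Distribute test cases across nodes and collect results,
    mirroring a testing-as-a-service backend (sketch)."""
    with ThreadPoolExecutor(max_workers=node_count) as pool:
        futures = [pool.submit(run_on_node, i % node_count, tc)
                   for i, tc in enumerate(test_cases)]
        return [f.result() for f in futures]

results = run_suite_in_parallel(["tap_login", "scroll_feed", "rotate"])
print(len(results))  # → 3
```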
- Conference Article
3
- 10.1109/icts.2016.7910301
- Jan 1, 2016
Manual graphical user interface (GUI) testing requires great effort, because it demands high precision and a great deal of time to run all scenarios repeatedly. It is also prone to errors, and many testing scenarios are never executed. To solve these problems, automated GUI testing has been proposed. The latest generation of automated GUI testing (the third) works through a visual approach and is called visual GUI testing (VGT). Automating VGT requires testing tools; with VGT tools, GUI testing can be performed automatically and can mimic human behavior. However, in the software development process, VGT feedback is still not automated, so effort is still required to run VGT manually and repeatedly. Continuous integration (CI) is a practice that automates the build whenever the program code, or any version of it, changes; each build consists of compilation, code inspection, testing, and deployment. To automate VGT feedback, this paper proposes combining CI practice with VGT practice. The focus of the research is on combining and assessing VGT tools and CI tools, since no prior research addresses this. The results show that the combination of Jenkins and JAutomate received the highest assessment.
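The visual approach above locates a target image (e.g., a screenshot of a button) inside a full-screen capture. A naive exact template match over 2-D pixel grids sketches the idea; real VGT tools such as JAutomate use fuzzy image recognition, so this is an assumption-laden simplification:

```python
def find_template(screen, template):
    """Return the (x, y) of the top-left corner where `template`
    exactly matches inside `screen`, or None (naive VGT sketch)."""
    sh, sw = len(screen), len(screen[0])
    th, tw = len(template), len(template[0])
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            if all(screen[y + dy][x + dx] == template[dy][dx]
                   for dy in range(th) for dx in range(tw)):
                return (x, y)  # click location for the matched widget
    return None

screen = [[0, 0, 0, 0],
          [0, 1, 2, 0],
          [0, 3, 4, 0]]
template = [[1, 2],
            [3, 4]]
print(find_template(screen, template))  # → (1, 1)
```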
- Research Article
- 10.1002/smr.2721
- Aug 5, 2024
- Journal of Software: Evolution and Process
Demonstrating software early and responding to feedback is crucial in agile development. However, it is difficult for stakeholders who are not on-site customers (end users, marketing people, designers, and so forth) to give feedback in an agile development environment. Successful graphical user interface (GUI) test executions can be documented and then demonstrated for feedback. In our new concept, GUI tests from behavior-driven development (BDD) are recorded, augmented, and demonstrated as videos. A GUI test is divided into several GUI unit tests, which are specified in Gherkin, a semi-structured natural language. For each GUI unit test, a video is generated during test execution; the test steps specified in Gherkin are traced and highlighted in the video. Stakeholders review these generated videos and provide feedback, for example, on misunderstandings of requirements or on inconsistencies. To evaluate the impact of videos in identifying inconsistencies, we asked 22 participants to identify inconsistencies between (1) given requirements in regular sentences and (2) demonstrated behaviors, either from videos with Gherkin specifications or from Gherkin specifications alone. Our results show that participants tend to identify more inconsistencies from demonstrated behaviors that do not accord with the given requirements, and that they recognize inconsistencies more easily through videos than through Gherkin specifications alone. The types of inconsistency are threefold: the mentioned feature can be incorrectly implemented, not implemented, or an unspecified new feature. We use a fictitious example to show how this feedback helps a product owner and her team manage requirements. We conclude that GUI test videos can help stakeholders give feedback more effectively; by obtaining early feedback, inconsistencies can be resolved, contributing to higher stakeholder satisfaction.
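Tracing Gherkin steps so a video can highlight the current one can be sketched by pairing each step with a timestamp. This is an illustrative simplification under our own assumptions (keyword-based line matching, fixed step duration); the actual system instruments real test executions:

```python
# Sample Gherkin-style GUI unit test (content is ours, illustrative).
GHERKIN = """\
Given the login page is open
When the user enters valid credentials
Then the dashboard is shown"""

def trace_steps(gherkin_text, start=0.0, step_duration=1.5):
    """Pair each Gherkin step with the (assumed) time it would run,
    yielding a timeline a video overlay could highlight."""
    keywords = ("Given", "When", "Then", "And", "But")
    timeline, t = [], start
    for line in gherkin_text.splitlines():
        line = line.strip()
        if line.startswith(keywords):
            timeline.append((round(t, 1), line))
            t += step_duration
    return timeline

for ts, step in trace_steps(GHERKIN):
    print(f"{ts:>4}s  {step}")
```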
- Research Article
140
- 10.1016/j.infsof.2013.03.004
- Apr 4, 2013
- Information and Software Technology
Graphical user interface (GUI) testing: Systematic mapping and repository
- Conference Article
143
- 10.1145/2970276.2970313
- Aug 25, 2016
Automated Graphical User Interface (GUI) testing is one of the most widely used techniques to detect faults in mobile applications (apps) and to test functionality and usability. GUI testing exercises the behaviors of an application under test (AUT) by executing events on GUIs and checking whether the app behaves correctly. In particular, because Android leads the market share of mobile OS platforms, a great deal of research on automated Android GUI testing techniques has been performed. Among the various techniques, we focus on model-based Android GUI testing, which utilizes a GUI model for systematic test generation and effective debugging support. Since test inputs are generated from the underlying model, accurate GUI modeling of an AUT is the most crucial factor in generating effective test inputs. However, most modern Android apps contain many dynamically constructed GUIs that make accurate behavior modeling more challenging. To address this problem, we propose a set of multi-level GUI Comparison Criteria (GUICC) that provides a selection of multiple abstraction levels for GUI model generation. Using multi-level GUICC, we conducted empirical experiments to identify the influence of GUICC on testing effectiveness. Results show that our approach, which performs model-based testing with multi-level GUICC, achieved higher effectiveness than activity-based GUI model generation. We also found that multi-level GUICC can alleviate the inherent state-explosion problems of an existing single-level GUICC for behavior modeling of real-world Android apps by flexibly manipulating the GUICC.
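The idea of comparison criteria at multiple abstraction levels can be sketched as computing a model-state key for the same screen at different granularities. The level names and field layout below are our own illustration, not the paper's implementation:

```python
def state_key(screen, level):
    """Abstract a captured screen into a model state at a chosen
    granularity (sketch of multi-level comparison criteria)."""
    if level == "activity":            # coarsest: Android activity only
        return (screen["activity"],)
    if level == "widgets":             # add the set of widget types
        return (screen["activity"],
                frozenset(w["type"] for w in screen["widgets"]))
    if level == "widgets+text":        # finest: also widget text values
        return (screen["activity"],
                frozenset((w["type"], w.get("text", ""))
                          for w in screen["widgets"]))
    raise ValueError(level)

s1 = {"activity": "Main", "widgets": [{"type": "Button", "text": "OK"}]}
s2 = {"activity": "Main", "widgets": [{"type": "Button", "text": "Cancel"}]}

# Coarse level merges the two screens into one model state;
# the finest level keeps them apart.
print(state_key(s1, "activity") == state_key(s2, "activity"))          # → True
print(state_key(s1, "widgets+text") == state_key(s2, "widgets+text"))  # → False
```

Coarser levels shrink the model (mitigating state explosion) at the cost of merging behaviors; finer levels do the opposite, which is the trade-off the paper's multi-level criteria let testers navigate.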
- Research Article
3
- 10.3390/app131910569
- Sep 22, 2023
- Applied Sciences
To deliver user-friendly experiences, modern software applications rely heavily on graphical user interfaces (GUIs), and it is paramount to ensure the quality of these GUIs through effective testing. This paper proposes a novel approach, "Finite state testing for GUI with test case prioritization using ZScore-Bald Eagle Search (Z-BES) and Gini Kernel-Gated Recurrent Unit (GK-GRU)", to enhance GUI testing accuracy and efficiency. First, historical project data is collected. Test cases are then prioritized using the Z-BES algorithm, which helps improve GUI testing. Attributes containing crucial details are extracted from the prioritized test cases. Additionally, a state transition diagram (STD) is generated to visualize system behavior, and a state activity score (SAS) is computed to quantify state importance using reinforcement learning (RL). Next, GUI components are identified and their text values extracted, and similarity scores between GUI text values and test case attributes are computed. Based on the similarity scores and SAS, a fuzzy algorithm labels the test cases. Data representation is enhanced by word embedding using GS-BERT. Finally, test case outcomes are predicted by the GK-GRU, validating GUI performance. The proposed work attains 98% accuracy, precision, recall, F-measure, and sensitivity, with low FPR and FNR error rates of 14.2 and 7.5, demonstrating the reliability of the model. The proposed Z-BES requires only 5587 ms to prioritize the test cases, keeping time complexity low, while the GK-GRU technique requires 38,945 ms to train, enhancing the computational efficiency of the system. In conclusion, experimental outcomes demonstrate that the proposed technique attains superior performance compared with prevailing approaches.
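The similarity step above compares GUI text values with test-case attributes. A minimal token-overlap (Jaccard) similarity sketches the idea; the actual pipeline uses GS-BERT embeddings, so this is a deliberately simplified stand-in:

```python
def jaccard(a, b):
    """Token-overlap similarity between two strings in [0, 1]
    (illustrative stand-in for embedding-based similarity)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

gui_text = "Submit order button"          # text scraped from a widget
attr = "test submit order flow"           # attribute of a test case
print(round(jaccard(gui_text, attr), 2))  # → 0.4
```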
- Book Chapter
- 10.4018/978-1-60566-060-8.ch177
- Jan 1, 2009
Software testing in general, and graphical user interface (GUI) testing in particular, is one of the major challenges in the lifecycle of any software system. GUI testing is inherently more difficult than traditional and command-line interface testing. Among the factors that make GUI testing different from traditional software testing, and significantly more difficult, are: a large number of objects, the different look and feel of objects, many parameters associated with each object, progressive disclosure, complex inputs from multiple sources, and graphical outputs. Existing techniques for the creation and management of test suites need to be adapted or enhanced for GUIs, and new testing techniques are needed to make the creation and management of test suites more efficient and effective. In this article, a methodology is proposed to create test suites for a GUI. The proposed methodology organizes the testing activity into various levels, and tests created at a particular level can be reused at higher levels. This methodology extends the notion of modularity and reusability to the testing phase. The organization and management of the created test suites closely resembles the structure of the GUI under test.
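The level-based organization with reuse can be sketched as suites that run their lower-level sub-suites before their own steps. Class and method names below are ours, a hedged illustration of the principle rather than the chapter's methodology:

```python
class TestSuite:
    """A test suite at one level of the GUI hierarchy; lower-level
    suites are reused verbatim by higher levels (sketch)."""
    def __init__(self, name, steps=(), subsuites=()):
        self.name = name
        self.steps = list(steps)          # this level's own test steps
        self.subsuites = list(subsuites)  # reused lower-level suites

    def run(self):
        executed = []
        for sub in self.subsuites:        # reuse lower-level tests first
            executed += sub.run()
        executed += [f"{self.name}:{s}" for s in self.steps]
        return executed

button_tests = TestSuite("Button", ["click", "disable"])
dialog_tests = TestSuite("Dialog", ["open", "close"], [button_tests])
print(dialog_tests.run())
# → ['Button:click', 'Button:disable', 'Dialog:open', 'Dialog:close']
```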
- Book Chapter
- 10.4018/978-1-60566-719-5.ch009
- Jan 1, 2010
- Research Article
6
- 10.4018/jitwe.2008040101
- Apr 1, 2008
- International Journal of Information Technology and Web Engineering
- Book Chapter
- 10.4018/978-1-87828-991-9.ch044
- Jan 1, 2009
- Conference Article
2
- 10.1109/is3c.2018.00013
- Dec 1, 2018
Automated GUI (graphical user interface) testing tools have been used to help engineers test whether a software GUI is displayed correctly on different smartphones. However, due to different screen aspect ratios (the ratio of width to height), the same content of a mobile application (app) may have a different layout on different smartphones. As a result, the test oracle generated by traditional methods may not be reusable across smartphones, which prolongs the testing process. In this paper, we present a GUI testing tool named FLAG (Fully Automatic mobile GUI testing), which aims to make the test oracle reusable without compromising test accuracy. The whole testing process, including generating test cases, simulating user gestures, and verifying results, is performed automatically by FLAG without human interaction. News applications were selected for our study not only because they are popular, but also because they support the most commonly used gestures, such as tap, scroll, spread, and pinch. In our experiment, we selected five commercial Android phones and one popular news app to evaluate the effectiveness of FLAG. The results show that FLAG performs better than existing methods and achieves an average accuracy of 95.20% in determining whether a test has passed or failed.
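One way to make a layout oracle survive different resolutions is to compare widget bounds in screen-relative coordinates rather than pixels. The sketch below illustrates that general idea under our own assumptions; it is not FLAG's actual algorithm:

```python
def normalize(bounds, screen_w, screen_h):
    """Convert pixel bounds (x, y, w, h) to screen-relative fractions."""
    x, y, w, h = bounds
    return (x / screen_w, y / screen_h, w / screen_w, h / screen_h)

def same_layout(b1, s1, b2, s2, tol=0.02):
    """True if two widgets occupy the same relative region on two
    screens, within a tolerance (sketch of a reusable oracle check)."""
    n1, n2 = normalize(b1, *s1), normalize(b2, *s2)
    return all(abs(a - b) <= tol for a, b in zip(n1, n2))

# The same top banner on a 1080x1920 phone and a 1440x2560 phone.
print(same_layout((0, 0, 1080, 200), (1080, 1920),
                  (0, 0, 1440, 267), (1440, 2560)))  # → True
```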
- Conference Article
15
- 10.1145/3457913.3457931
- Nov 1, 2020
Graphical user interfaces (GUIs) are unavoidable in modern software apps: they facilitate interaction between users and apps. As the Google Play store shows, apps with higher download counts often have higher-quality, well-designed, and well-tested GUIs. GUI testing has become a necessary step in the app development process, and related research has become a hot spot in recent years. However, there has been no review of GUI testing for mobile apps, which creates obstacles for new researchers. In this paper, we systematically review publications between 2010 and 2020 to gain insight into GUI testing for mobile apps. Although the earliest research was published around 1997, we believe the years considered are likely to include the advances in the field. Specifically, the paper aims to identify (i) the main objectives of GUI testing, (ii) the approaches applied, (iii) the evaluation metrics, and (iv) the challenges and future research directions. To cover all relevant literature, following a predefined systematic literature review procedure involving both automatic and manual search strategies, we found 75 primary studies and posed four research questions to analyze them. We found that functionality is the main objective of GUI testing, model-based testing is the most common approach, and metrics such as error detection, execution time, and code coverage are often used to evaluate the performance of GUI testing techniques. Finally, we outline key challenges and possible research directions. We believe our work provides a guide for new researchers and a basis for further research in GUI testing.
- Conference Article
2
- 10.1109/icstw.2010.19
- Apr 1, 2010
GUI (graphical user interface) test cases contain much richer information than test cases in non-GUI testing. Based on this information, GUI test profiles can be represented in more forms. In this paper, we study the modeling of test profiles in GUI testing and propose several models of GUI test profiles. We then present a methodology for studying the relationship between test profiles and fault detection in GUI testing, and propose a control scheme based on this relationship that may improve the efficiency of GUI testing.
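A common way to model a test profile is as a probability distribution over GUI events from which test cases are sampled. The sketch below illustrates that one simple model; the profile values and event names are our own illustrative assumptions, not the paper's models:

```python
import random

def sample_test_case(profile, length, seed=0):
    """Generate a test case of `length` events by sampling from an
    event-probability profile (one simple test-profile model)."""
    rng = random.Random(seed)   # seeded for reproducible sampling
    events = list(profile)
    weights = [profile[e] for e in events]
    return [rng.choices(events, weights=weights)[0] for _ in range(length)]

profile = {"click_ok": 0.5, "type_text": 0.3, "open_menu": 0.2}
case = sample_test_case(profile, 5)
print(len(case))  # → 5
```

Shifting the weights toward event classes that historically revealed faults is the kind of control the paper's scheme aims at.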
- Conference Article
11
- 10.1109/ssiri.2008.16
- Jul 1, 2008
GUI (graphical user interface) testing plays an important role in ensuring the correctness and reliability of software applications. To perform GUI testing, a test script must be prepared (or generated by tools) so that massive numbers of user interactions and verifications can be conducted automatically. Ideally, the actions of a test script should be organized based on the structure of the GUI so that they are easier to extend and maintain. Unfortunately, current methodologies and tools fall short in supporting such an organization. This paper proposes an object-based approach, called component abstraction, to model the structure of a GUI. A GUI testing modeling language, GTML, is defined, and a systematic approach to applying component abstraction is described. We show that a test script written in GTML is more robust and easier to maintain than an ordinary test script. In addition, we implement a visual environment for the development of GTML scripts so that testers (and developers) do not need to write GTML scripts in plain text.
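The object-based component abstraction above groups test actions by GUI component so scripts mirror the GUI's structure. A hedged sketch of that organization (class and method names are ours; GTML itself is a modeling language, not Python):

```python
class Component:
    """A GUI component that records the test actions performed on it,
    so the script's structure mirrors the GUI's (sketch)."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def action(self, verb, target):
        self.log.append(f"{verb} {self.name}.{target}")
        return self   # allow chaining actions on the same component

login_form = Component("LoginForm")
login_form.action("type", "username") \
          .action("type", "password") \
          .action("click", "submit")
print(login_form.log)
```

Because actions are attached to the component they exercise, renaming or restructuring a widget requires changing one object rather than every line of a flat script, which is the maintainability argument the paper makes for GTML.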