Articles published on Combinatorial testing
315 Search results
- Research Article
- 10.1038/s41593-025-02118-7
- Nov 24, 2025
- Nature Neuroscience
- Christopher R Bye + 29 more
Heterogeneous and predominantly sporadic neurodegenerative diseases, such as amyotrophic lateral sclerosis (ALS), remain highly challenging to model. Patient-derived induced pluripotent stem cell (iPSC) technologies offer great promise for these diseases; however, large-scale studies demonstrating accelerated neurodegeneration in patients with sporadic disease are limited. Here we generated an iPSC library from 100 patients with sporadic ALS (SALS) and conducted population-wide phenotypic screening. Motor neurons derived from patients with SALS recapitulated key aspects of the disease, including reduced survival, accelerated neurite degeneration correlating with donor survival, transcriptional dysregulation and pharmacological rescue by riluzole. Screening of drugs previously tested in ALS clinical trials revealed that 97% failed to mitigate neurodegeneration, reflecting trial outcomes and validating the SALS model. Combinatorial testing of effective drugs identified baricitinib, memantine and riluzole as a promising therapeutic combination for SALS. These findings demonstrate that patient-derived iPSC models can recapitulate sporadic disease features, paving the way for a new generation of disease modeling and therapeutic discovery in ALS.
- Research Article
- 10.1142/s021819402550072x
- Oct 8, 2025
- International Journal of Software Engineering and Knowledge Engineering
- Heng Xu + 4 more
Combinatorial Test Sequence Generation Method Integrated with STPA
- Research Article
- 10.1142/s0219649225501060
- Oct 6, 2025
- Journal of Information & Knowledge Management
- Ramgouda B Patil
Software testing is a crucial part of the software development life cycle, accounting for roughly 30-40% of the overall cost of a software project. Efficient test case generation is therefore essential for reducing testing time, effort and cost, and combinatorial testing is an established technique for this purpose. Nevertheless, exhaustive testing of highly configurable software is infeasible given limited resources and time. Therefore, a new technique called Coati Beetle Optimisation (CBO) is proposed for effective test case generation using combinatorial testing. First, the input model for test case generation is obtained from the database, and all combinations of test cases are enumerated. Then, to produce an optimal test suite, combinatorial testing is performed using the proposed CBO technique. The CBO approach is formulated by combining Dung Beetle Optimisation (DBO) with the Coati Optimisation Algorithm (COA). The CBO technique achieved a minimum test suite size of 110 and a fitness of 0.038.
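The workflow the abstract describes, enumerating all parameter combinations and then searching for a small subset that still covers every interaction, can be illustrated without any metaheuristic. The sketch below is a minimal greedy one-test-at-a-time generator for 2-way (pairwise) coverage; it is not the paper's CBO algorithm, and the parameter model is invented for illustration:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy one-test-at-a-time pairwise (2-way) covering suite.

    params: dict mapping parameter name -> list of possible values.
    Returns a list of dicts, each a complete test case, such that every
    pair of values from two different parameters appears in some test.
    """
    names = sorted(params)
    # All 2-way interactions that must be covered, as ((name, value), (name, value)).
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    suite = []
    while uncovered:
        # Pick the full combination covering the most still-uncovered pairs.
        best, best_gain = None, -1
        for values in product(*(params[n] for n in names)):
            case = dict(zip(names, values))
            gain = sum(
                1 for p in uncovered
                if case[p[0][0]] == p[0][1] and case[p[1][0]] == p[1][1]
            )
            if gain > best_gain:
                best, best_gain = case, gain
        suite.append(best)
        uncovered = {
            p for p in uncovered
            if not (best[p[0][0]] == p[0][1] and best[p[1][0]] == p[1][1])
        }
    return suite
```

The inner scan over the full Cartesian product is exponential in the number of parameters, which is exactly why metaheuristics such as CBO are studied; the sketch is only practical for small models.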
- Research Article
- 10.3390/s25185764
- Sep 16, 2025
- Sensors (Basel, Switzerland)
- Haitao Min + 5 more
Scenario-based testing is a mainstream approach for evaluating the safety of automated driving systems (ADS). However, logical scenarios are defined through parameter spaces, and performance differences among systems under test make it difficult to ensure fairness and coverage using the same concrete parameters. Accordingly, an automated driving system testing method is proposed. Guided by the established full-coverage testing framework, a quantitative evaluation method for scenario representativeness is first proposed by jointly analyzing naturalistic driving probability distributions and hazard-related characteristics. Furthermore, a hybrid algorithm integrating heat-guided hierarchical search and genetic optimization is developed to address the non-uniform full-coverage problem, enabling efficient selection of representative parameters that ensure complete coverage of the logical scenario space. The proposed method is validated through empirical studies in representative use cases, including lead vehicle braking and cut-in scenarios. Experimental results show that the proposed method achieves 100% coverage of the logical scenario parameter space with an 8% boundary fitting error, outperforming mainstream baselines including Monte Carlo (84.3%, 19%), combinatorial testing (86.5%, 14%) and importance sampling (72.0%, 7%). The approach achieves exhaustive coverage of the logical scenario space with limited concrete scenarios, and effectively supports the development of consistent, reproducible and efficient scenario generation frameworks for testing organizations.
- Research Article
- 10.47772/ijriss.2025.909000665
- Sep 1, 2025
- International Journal of Research and Innovation in Social Science
- Maslita Abd Aziz + 2 more
Combinatorial Testing for Identifying Defect Patterns in Manufacturing
- Research Article
- 10.21015/vtse.v13i3.2125
- Sep 1, 2025
- VFAST Transactions on Software Engineering
- Alam Zeb + 3 more
This systematic literature review (SLR) investigates search-based strategies for generating combinatorial test suites using covering arrays (CAs) to efficiently test system interactions. Conducted following PRISMA guidelines, the review analyzes 91 primary studies published between 2003 and 2025, selected through a rigorous process from major academic databases. The identified strategies are categorized into five types: standard, mix, adaptive, hybrid, and hyper-heuristic, based on their underlying algorithmic approaches, including swarm intelligence, evolutionary algorithms, and hyper-heuristic techniques. Each strategy is examined in depth, evaluating its effectiveness in generating high-quality combinatorial test suites. The review also highlights challenges in applying these strategies to varying software testing scenarios. Based on the findings, it provides practical insights to enhance their application and effectiveness in real-world contexts. This work supports broader adoption of search-based testing to improve software quality and reduce defect rates.
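The covering arrays (CAs) that these strategies generate are defined by a coverage criterion that is straightforward to check directly. As an illustration not tied to any surveyed strategy, the sketch below measures the fraction of t-way value combinations a candidate suite covers; a suite is a strength-t covering array exactly when the result is 1.0:

```python
from itertools import combinations, product

def t_way_coverage(suite, params, t=2):
    """Fraction of t-way value combinations covered by a test suite.

    suite: list of dicts (parameter name -> chosen value).
    params: dict (parameter name -> list of possible values); assumes
    t <= number of parameters. A suite is a covering array of strength t
    when this returns 1.0.
    """
    names = sorted(params)
    total = covered = 0
    for group in combinations(names, t):
        for values in product(*(params[n] for n in group)):
            total += 1
            # A combination counts as covered if some test matches all t slots.
            if any(all(case[n] == v for n, v in zip(group, values))
                   for case in suite):
                covered += 1
    return covered / total
```

For example, the classic four-row orthogonal array over three binary parameters achieves full 2-way coverage while covering only half of the 3-way combinations.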
- Research Article
- 10.1080/0954898x.2025.2517130
- Jul 12, 2025
- Network: Computation in Neural Systems
- Selvakumar J + 2 more
In software development, testing is crucial for producing good-quality software. Test suites and test cases must be prepared within minimal execution time, which gives rise to test case prioritization (TCP) problems. Prior research has focused mainly on constraints such as time and fault detection in TCP. In this research, the novel Fractional Hybrid Leader Based Optimization (FHLO) is introduced with constraint handling for combinatorial TCP. TCP is an important technique for detecting faults earlier, as it reduces regression testing cost by prioritizing test case execution. The priority of a test case for program execution is decided based on detected faults and branch coverage. The FHLO algorithm performs TCP for program fault detection, prioritizing test cases by maximizing the Average Percentage of Branch Coverage (APBC) and the Average Percentage of Faults Detected (APFD). In the analysis, the devised FHLO algorithm attains a maximum of 0.966 for APFD and 0.888 for APBC.
- Research Article
- 10.1145/3728964
- Jun 22, 2025
- Proceedings of the ACM on Software Engineering
- Lixin Xu + 6 more
REST APIs are essential for building modern enterprise systems, but effectively testing them remains challenging, particularly due to difficulties in inferring constraints from specifications. Current testing approaches typically use feedback from HTTP status codes to guide input generation. However, they overlook valuable information available in the accompanying error messages, reducing their effectiveness in exploring the APIs’ input spaces. In this paper, we propose EmRest, a black-box testing approach that leverages error message analysis to enhance both valid and exceptional test input generation for REST APIs. For each operation under test, EmRest first identifies all possible value assignment strategies for each of its input parameters. It then repeatedly applies combinatorial testing to sample test inputs based on these strategies, and statistically analyzes the error messages (of 400-range status codes) received to infer and exclude invalid combinations of value assignment strategies (i.e., constraints of the input space). Additionally, EmRest mutates the valid value assignment strategies that are finally identified to generate test inputs for exceptional testing. The error messages (of 500-range status codes) received are categorized to identify bug-prone operations, to which more testing resources are allocated. Our experimental results on 16 real-world REST APIs demonstrate the effectiveness of EmRest. It achieves higher operation coverage than state-of-the-art approaches in 50% of APIs, and detects 226 unique bugs undetected by other approaches.
- Research Article
- 10.1007/s11227-025-07459-5
- Jun 9, 2025
- The Journal of Supercomputing
- Yunlong Sheng + 4 more
A combinatorial test case prioritization method based on the quadratic network with wide multi-layer kernels
- Research Article
- 10.17749/2070-4909/farmakoekonomika.2024.263
- May 2, 2025
- FARMAKOEKONOMIKA. Modern Pharmacoeconomics and Pharmacoepidemiology
- I Yu Torshin + 1 more
Background. Phenol and parabens exert bactericidal properties, are relatively low-toxic (in acute toxicity tests) and are used in the pharmaceutical, cosmetic and food industries as stabilizers/preservatives for the final product. Despite their widespread use, the long-term toxicological effects of phenol and parabens remain largely unexplored. Objective: to analyze the results of basic and clinical studies on the chronic toxicity of phenol and parabens. Material and methods. The study included 544 articles found using the query “Preservatives, Pharmaceutical [MeSH Terms] AND Phenol [MeSH Terms]” in the PubMed/MEDLINE biomedical publications database. Methods of topological and metric analysis of big data were applied, developed in the scientific school of Academician of the Russian Academy of Sciences Yu.I. Zhuravlev. Keywords were ranked by empirical Rudakov–Torshin informativeness functionals within combinatorial solvability theory, followed by combinatorial testing of solvability to find the most informative terms. Results. Despite individual studies on the acute toxicity of phenol and its derivatives (including parabens), the chronic toxicity of phenol and parabens remains poorly understood. This is indicated not only by a lack of carefully performed research, but also by the information in safety data sheets supplied by manufacturers of the relevant substances. The associations of phenol and paraben blood levels with certain chronic pathologies in humans have been insufficiently studied. At the same time, the authors of fundamental research, if they do not outright sound the alarm, strongly underline the need for large-scale clinical trials on the long-term toxic effects of phenol and parabens.
Firstly, this is due to the complex estrogen-like effects of phenol and parabens, including (1) effects on estrogen sulfotransferases, (2) direct interactions with estrogen receptors, and (3) influence on the expression of steroid receptor genes. Secondly, available data from fundamental research indicate that phenol/parabens stimulate the pathophysiological mechanisms of oncogenesis (systematic disturbances in gene expression and corresponding changes in the structure of organ tissues). Thirdly, teratogenic and other toxic effects on the embryo and pregnancy were demonstrated not only in experimental studies (neurotoxicity and teratogenesis in animal models), but also in clinical observations (metabolic disorders in pregnant women, including disturbed purine metabolism and fatty acid beta-oxidation, hyperactivity and/or excess body weight in children, asthma, thyroid dysfunction, etc.). Conclusion. Findings from basic research and selected clinical studies dictate an urgent need to examine the association of phenol/paraben blood levels with chronic pathologies in large-scale clinical trials with cross-sectional and longitudinal designs. The lack of indication of toxic effects of parabens and phenol in certain clinical studies may simply be an artifact of incorrect data analysis.
- Research Article
- 10.1016/j.eswa.2025.126634
- May 1, 2025
- Expert Systems with Applications
- Kamaraj Kanagaraj + 3 more
Combinatorial test case prioritization using hybrid Energy Valley Dwarf Mongoose Optimization approach
- Research Article
- 10.52326/jes.utm.2025.32(1).04
- Apr 25, 2025
- JOURNAL OF ENGINEERING SCIENCE
- Petru Cervac + 1 more
This paper introduces a novel storage format for covering arrays, designed to minimize file size through efficient compression. The proposed format employs Asymmetric Numeral System (ANS) encoding for array data, and Run-Length Encoding (RLE) and Variable-Length Encoding (VLE) for metadata. The goal is a compact, standardized format that facilitates sharing and reuse of covering arrays across different applications. Experimental evaluation on a dataset of 21,964 covering arrays from the National Institute of Standards and Technology (NIST) demonstrates that the new format outperforms general-purpose compression algorithms such as ZIP, BZIP2 and XZ in most cases, particularly for larger covering arrays with high parameter counts. While previous work on covering array storage focused on archival and retrieval efficiency, the proposed method significantly reduces storage requirements without loss of structural integrity. It preserves the combinatorial properties of covering arrays while reducing redundancy, making it a practical alternative for large-scale combinatorial testing applications.
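One of the building blocks the paper relies on, Run-Length Encoding, is simple enough to sketch compactly. The snippet below is a generic RLE round trip in Python, not the proposed file format itself (which combines ANS for array data with RLE and VLE for metadata):

```python
def rle_encode(values):
    """Run-length encode a sequence into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Invert rle_encode: expand (value, run_length) pairs back to a list."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

RLE pays off exactly when the input has long runs of repeated symbols, which is why it suits structured metadata better than the covering array rows themselves.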
- Research Article
- 10.1007/s42979-025-03937-y
- Apr 15, 2025
- SN Computer Science
- Andrea Bombarda + 1 more
Combinatorial Interaction Testing is a widely used method for testing intricate systems. In most cases, test suites are generated from scratch. However, testers may want to reuse existing tests in a new test suite, either to speed up the generation process or because those tests are valuable for checking the system under test in critical conditions. In this paper, we propose a general framework for handling existing test suites with combinatorial test generators. We also define partial tests and partial test suites, and discuss the scenarios in which partial tests should or could be reused. Finally, we compare the most common tools for completing test suites, namely ACTS, PICT, and pMEDICI+, under different levels of incompleteness in the seeds. ACTS with seeds generally performed best in terms of test suite size and generation time; PICT and pMEDICI+ were slower and produced larger test suites on average. We found that using seeds can come at a cost, especially when test cases are partial: completing them is not always cost-effective in terms of generation time. The choice of reusing or discarding existing tests must be based on use-case-specific requirements. We do not recommend using seeds composed of partial test cases, provided they are not required for some other reason. On the contrary, we envision the use of partial test suites when a test suite of higher strength is needed.
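Seeding as described here amounts to completing the missing slots of a partial test case while preserving as much interaction coverage as possible. The sketch below is one simple greedy interpretation, not the strategy used by ACTS, PICT or pMEDICI+; the data model (None marking an unfilled slot, an explicit set of uncovered pairs) is assumed for illustration:

```python
from itertools import combinations

def complete_seed(seed, params, uncovered):
    """Fill the None slots of a partial test case (a 'seed').

    seed: dict name -> value or None; params: dict name -> values;
    uncovered: mutable set of ((name, value), (name, value)) pairs still
    needed, each with its two names in sorted order.
    Each missing slot is greedily assigned the value covering the most
    still-uncovered pairs with the values already fixed, and the pairs
    it covers are removed from `uncovered`.
    """
    case = dict(seed)
    for name in sorted(n for n, v in case.items() if v is None):
        def gain(v):
            g = 0
            for other, ov in case.items():
                if other == name or ov is None:
                    continue
                if tuple(sorted([(name, v), (other, ov)])) in uncovered:
                    g += 1
            return g
        case[name] = max(params[name], key=gain)
        for other, ov in case.items():
            if other != name and ov is not None:
                uncovered.discard(tuple(sorted([(case[name], )[0] and (name, case[name]), (other, ov)])))
    return case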
- Research Article
- 10.52783/jisem.v10i30s.4889
- Mar 31, 2025
- Journal of Information Systems Engineering and Management
- Mahipal Chakravarthy G
The effectiveness of Combinatorial Testing depends on effective techniques for Fault Localization (FL). This paper presents an FL technique that accurately determines Failure Inducing Combinations (FICs) using a greedy heuristic approach. Given an executed test suite as input, it localizes the FICs by considering all t-way combinations and generating test cases according to their probability of passing. The initial study and experimentation are promising, providing scope for a rigorous empirical study.
- Research Article
- 10.3390/pr13030845
- Mar 13, 2025
- Processes
- Shumin Song + 7 more
The high damage rate of mechanical cutting and the low harvesting efficiency of stem mustard are major constraints on the sustainable development of its industry. In this study, a reciprocating cutter device was designed for stem mustard grown under the special conditions of southwest China. A reciprocating cutter model was developed in ANSYS/LS-DYNA. The parameters considered were cutting height (X1), angle of incision (X2), forward speed (X3) and single-run displacement (X4), with cutting force (F) and cutting power (P) as evaluation metrics. A multifactor quadratic regression model was developed for the orthogonal combinatorial testing procedure using the Box–Behnken design methodology. The cutting force and cutting power obtained by differentiating the regression equations were 41.4 N and 36.756 W, respectively. Response surface methodology and analysis of variance (ANOVA) were used to determine the optimum operating parameters of the cutting tools: X1 = 1.45 mm, X2 = 12°, X3 = 0.5 m/s and X4 = 93 mm. A maximum cutting success rate of 94% and a minimum damage rate of 6% for stem mustard under the optimum combination of cutting parameters were verified through several field trials. These results provide valuable technical insights into the optimal design of harvesting equipment for stem mustard to improve the success rate and reduce the damage rate.
- Research Article
- 10.3390/jmse13020338
- Feb 12, 2025
- Journal of Marine Science and Engineering
- Lijia Chen + 6 more
Collision avoidance algorithms play a crucial role in ensuring the safety and effectiveness of autonomous ships, which require comprehensive testing in realistic multi-ship encounter scenarios. However, existing scenario generation methods often inadequately represent the spatiotemporal complexity and dynamic risk interactions of real-world encounters, leading to biased evaluations. To bridge this gap, this paper proposes a combinatorial-testing-based scenario generation framework integrated with spatiotemporal complexity optimisation. First, a full-process scenario representation model is developed by abstracting real-world navigation features into a discretised parameter space. Subsequently, a combinatorial-testing-based scenario generation method is adopted to cover the parameter space, generating a high-coverage scenario set. Finally, spatiotemporal complexity is introduced to filter out oversimplified scenarios and extremely dangerous scenarios. Experiments demonstrated that 13.7% of generated scenarios were eliminated as unrealistic or trivial, while high-risk encounter scenarios and multi-ship interaction scenarios were amplified by 7.96 times and 5.84 times, respectively. Compared to conventional methods, the optimised scenario set exhibited superior alignment with real-world complexity, including dynamic risk escalation and multi-ship coordination challenges. The proposed framework not only advances scenario generation methodology through its integration of combinatorial testing and complexity-driven optimisation, but also provides a practical tool for rigorously validating autonomous ship safety systems.
- Research Article
- 10.52783/jisem.v10i5s.628
- Jan 24, 2025
- Journal of Information Systems Engineering and Management
- Rekha Jayaram
In complex software systems, identifying the parameter combinations that lead to failures is important for effective debugging and robust software quality assurance. Combinatorial testing (CT) can greatly reduce the number of test cases (TCs) used for testing a complex system by generating TCs based on various parameter combinations, while still maintaining high fault-detection capability. In this paper, a rule-based approach is presented that aids in identifying the Failure Inducing Combinations (FICs) of parameters that caused a fault in CT-generated TCs. The approach uses heuristic rules to methodically split parameter combinations into those more and less likely to cause failure, thereby narrowing the set of failure sources. The approach is tested on two case studies: i) a three-factor authentication system and ii) existing literature-based input. The results show significant accuracy and time savings for program debugging, indicating the applicability of the approach to real-world problems. The approach successfully identified the pairwise combinations of parameters that are likely to cause failure.
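The core intuition behind isolating failure-inducing combinations can be sketched independently of the paper's rule-based heuristics: a value pair observed only in failing tests remains a suspect, while any pair that also appears in a passing test is exonerated. The following is a minimal pairwise version of that idea; the fault model and test results are invented for illustration:

```python
from itertools import combinations

def suspicious_pairs(results):
    """Identify candidate failure-inducing 2-way combinations.

    results: list of (test_case_dict, passed_bool).
    A pair that occurs in some failing test but never in a passing test
    cannot be ruled out as failure-inducing; any pair seen in a passing
    test is exonerated.
    """
    def pairs(case):
        # All 2-way (name, value) combinations present in one test case.
        return {
            tuple(sorted([(a, case[a]), (b, case[b])]))
            for a, b in combinations(sorted(case), 2)
        }
    in_fail, in_pass = set(), set()
    for case, passed in results:
        (in_pass if passed else in_fail).update(pairs(case))
    return in_fail - in_pass
```

Running more passing tests shrinks the suspect set, which is exactly why FIC techniques generate additional test cases biased toward passing: each new pass exonerates every pair it contains.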
- Research Article
- 10.1038/s41598-024-82455-y
- Jan 2, 2025
- Scientific Reports
- Yanpeng Zhang + 1 more
When combinatorial testing is used to locate faults in the complex signalling system of high-speed rail, the masking effects caused by multiple faults can lead to an explosion of test cases; a method is therefore needed to locate the Minimum Fault Schema (MFS) accurately and efficiently. Taking the Automatic Train Operation (ATO) scenario in intelligent high-speed rail as an example, a fault localization method based on the Adaptive Error Locating Array (AELA) algorithm is proposed. First, according to the characteristics of ATO, an adaptive fault localization model is designed and the test parameter table is constructed. The Partial Variable Intensity Covering Array (PVICA) algorithm is then used to generate the initial set of test cases, which are executed sequentially. Based on the execution results, the fault localization module is invoked to generate additional test cases, designed to locate the MFS within the given parameter range using the Adaptive Particle Swarm Optimization (APSO) algorithm. Finally, the MFS is determined. The validity and accuracy of the proposed method are verified on the simulation testing platform for the Beijing-Shenyang high-speed rail. Ablation and comparison experiments show that the Integrity, average Accuracy and average C-Evaluation of the proposed algorithm reach up to 100%, 91.07% and 84.56%, respectively. Compared with four mainstream adaptive fault localization algorithms, the proposed algorithm is less affected by the masking effects caused by multiple faults and requires the fewest test cases. These results offer useful guidance for verifying the integrity and reliability of the ATO function and contribute to the intrinsic safety of rail transit.
- Research Article
- 10.14569/ijacsa.2025.0160774
- Jan 1, 2025
- International Journal of Advanced Computer Science and Applications
- Muhamad Asyraf Anuar + 3 more
Integration of Grey Wolf Optimizer Algorithm with Combinatorial Testing for Test Suite Generation
- Research Article
- 10.3390/math13010097
- Dec 29, 2024
- Mathematics
- Elod P Csirmaz + 1 more
Enumerating the extremal submodular functions defined on subsets of a fixed base set has only been done for base sets up to five elements. This paper reports the results of attempting to generate all such functions on a six-element base set. Using improved tools from polyhedral geometry, we have computed 360 billion of them, and provide the first reasonable estimate of their total number, which is expected to be between 1000 and 10,000 times this number. The applied Double Description and Adjacency Decomposition methods require an insertion order of the defining inequalities. We introduce two novel orders, which speed up the computations significantly, and provide additional insight into the highly symmetric structure of submodular functions. We also present an improvement to the combinatorial test used as part of the Double Description method, and use statistical analyses to estimate the degeneracy of the polyhedral cone used to describe these functions. The statistical results also highlight the limitations of the applied methods.