Abstract

Over the years, the opposition-based learning (OBL) technique has proven effective at enhancing the convergence of meta-heuristic algorithms. Because OBL generates alternative candidate solutions in one or more opposite directions, it promotes good exploration and exploitation of the search space. In the last decade, many OBL techniques have been established in the literature, including Standard-OBL, General-OBL, Quasi Reflection-OBL, Centre-OBL and Optimal-OBL. Although proven useful, most existing adoptions of OBL into meta-heuristic algorithms rely on a single technique. If the search space contains many peaks with potentially many local optima, a single OBL technique may not be sufficiently effective. In fact, if the peaks are close together, a single OBL technique may not be able to prevent entrapment in local optima. Addressing this issue, assembling a sequence of OBL techniques into a meta-heuristic algorithm can enhance the overall search performance. Based on a simple penalize-and-reward mechanism, the best-performing OBL is rewarded by continuing its execution in the next cycle, whilst a poorly performing one ceases its current turn. This paper presents a new adaptive approach, termed OBL-JA, that integrates more than one OBL technique into the Jaya Algorithm. Unlike other adoptions of OBL that use a single OBL type, OBL-JA uses several OBLs and selects among them based on their individual performance. Experimental results using combinatorial testing problems as a case study demonstrate that OBL-JA is very competitive with existing works in terms of test suite size. The results also show that OBL-JA performs better than the standard Jaya Algorithm in most of the tested cases, owing to its ability to adapt its behaviour based on the current performance feedback of the search process.
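The penalize-and-reward selection described above can be pictured as a simple operator scheduler. The Python sketch below is one plausible reading of that mechanism, not the authors' implementation: the class name AdaptiveOBLSelector, the one-turn penalty and the feedback interface are illustrative assumptions.

```python
import random


class AdaptiveOBLSelector:
    """Illustrative penalize-and-reward scheduler for several OBL operators.

    Assumption: an operator that improved the best fitness keeps its turn in
    the next cycle; an operator that failed is skipped once before re-entering
    the rotation. This mirrors the mechanism sketched in the abstract only.
    """

    def __init__(self, operators):
        # operators: list of callables, e.g. [standard_obl, quasi_reflection_obl]
        self.operators = list(operators)
        self.skip_next = {op: False for op in self.operators}
        self.current = 0

    def next_operator(self):
        """Return the operator whose turn it is, skipping penalized ones once."""
        for _ in range(len(self.operators)):
            op = self.operators[self.current]
            if self.skip_next[op]:
                self.skip_next[op] = False                  # penalty lasts one turn
                self.current = (self.current + 1) % len(self.operators)
                continue
            return op
        return random.choice(self.operators)                # fallback: all penalized

    def feedback(self, improved):
        """Reward keeps the same operator next cycle; penalty skips its next turn."""
        if improved:
            return
        op = self.operators[self.current]
        self.skip_next[op] = True
        self.current = (self.current + 1) % len(self.operators)
```

In each cycle the population would be transformed by the operator returned from next_operator(), and feedback(...) would report whether the opposite population improved the current best fitness.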

Highlights

  • Optimization relates to the process of finding one or more best solutions that either minimize or maximize the return on investment

  • In this paper, we propose a new adaptive strategy for t-way test suite generation based on the Jaya Algorithm (JA) and the opposition-based learning (OBL) concept, called the adaptive Jaya Algorithm based on Opposition-based Learning (OBL-JA)

  • OBL-JA has been obtained by grafting different types of OBL operators, such as Standard-OBL, General-OBL, Quasi Reflection-OBL, Centre-OBL and Optimal-OBL, into the standard JA strategy (common formulations of these operators are sketched after this list)
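For orientation, the sketch below gives commonly used textbook formulations of three of the OBL operators named above (Standard-OBL, General-OBL and Quasi Reflection-OBL) for a variable x in [a, b]. Centre-OBL and Optimal-OBL are omitted because their definitions vary across papers, and the exact parameterisations used in OBL-JA may differ.

```python
import random


def standard_obl(x, a, b):
    """Standard opposite of x within the interval [a, b]: a + b - x."""
    return a + b - x


def general_obl(x, a, b):
    """Generalised OBL: the opposite is scaled by a random factor k in [0, 1].

    Note: the result may fall outside [a, b]; implementations usually clip or
    re-sample it back into the search range.
    """
    k = random.random()
    return k * (a + b) - x


def quasi_reflection_obl(x, a, b):
    """Quasi-reflected point: uniformly sampled between the interval centre and x."""
    c = (a + b) / 2.0
    return random.uniform(min(c, x), max(c, x))
```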

Summary

INTRODUCTION

Optimization relates to the process of finding one or more best solutions that either minimize or maximize the return on investment. The effort to address the aforementioned shortcomings is justified through the search for a new strategy that takes the newly developed breed of meta-heuristic algorithms into account. Given such prospects, this paper proposes a new t-way testing strategy, called OBL-JA, based on an adaptive Opposition-based Learning Jaya Algorithm for t-way test suite generation. Meta-heuristic based t-way strategies use the algorithm as the core implementation for generating the test suite. The first category uses a single meta-heuristic algorithm as the search engine for the test cases. Examples of this category include SA [1], GA [1], [2], ACA [2], PSO [3], HS [4], FPA [6], the Whale Optimization Algorithm [45] and CS [5]. The strategy adapts the OBL operator to enhance its search capabilities.
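Since the Jaya Algorithm is the core search engine that OBL-JA extends, the following real-valued sketch of the standard Jaya update (move toward the best solution and away from the worst) may help fix ideas. The function name and the continuous encoding are assumptions; the actual t-way strategy operates on discrete test-case values.

```python
import random


def jaya_update(population, best, worst):
    """One Jaya move: shift every candidate toward the best member and away
    from the worst, using the standard update
        x' = x + r1 * (best - |x|) - r2 * (worst - |x|).

    Real-valued sketch only; a t-way strategy would map values back to valid
    parameter levels and keep x' only if it improves the fitness.
    """
    updated = []
    for x in population:
        updated.append([
            xj + random.random() * (bj - abs(xj)) - random.random() * (wj - abs(xj))
            for xj, bj, wj in zip(x, best, worst)
        ])
    return updated
```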

PROPOSED STRATEGY
EXPERIMENTS AND DISCUSSION
Findings
CONCLUSION