Abstract

Meta-heuristic algorithms have been used extensively to solve many kinds of optimization problems. Population initialization plays a prominent role in these algorithms, since the initial distribution of candidate solutions can affect convergence toward a robust optimum. To investigate the effect of diversity, many scholars have focused on improving the reliability and quality of meta-heuristic algorithms. To initialize the population in the search space, this paper proposes three low discrepancy sequences, the WELL sequence, the Knuth sequence, and the Torus sequence, as replacements for the uniform distribution. It also presents a detailed survey of initialization methods for PSO and DE based on quasi-random sequence families such as the Sobol sequence, the Halton sequence, and the uniform random distribution. The proposed variants of PSO (TO-PSO, KN-PSO, and WE-PSO), BA (BA-TO, BA-WE, and BA-KN), and DE (DE-TO, DE-WE, and DE-KN) have been evaluated on well-known benchmark test problems and on the training of artificial neural networks. The experimental findings indicate that initialization based on low discrepancy sequences clearly outperforms uniform random initialization, and they highlight the pronounced effect of the proposed methodology on convergence and population diversity. It is expected that this comparative simulation survey of low discrepancy sequences will help researchers analyze meta-heuristic algorithms in greater detail.

Highlights

  • The experimental section is divided into three sub-sections, each dedicated to the simulation results of one evolutionary algorithm (EA): particle swarm optimization (PSO), differential evolution (DE), and the bat algorithm (BA), respectively

  • This paper introduces new initialization strategies based on the WELL sequence, the Knuth sequence, and the Torus sequence, which are used to initialize the population in the PSO, BA, and DE algorithms

  • Using the low discrepancy sequence family, the proposed methods are validated empirically on a robust suite of benchmark test functions and on artificial neural network training
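The idea shared by all of these strategies is to replace uniform random initialization with a low discrepancy (quasi-random) point set, so that the initial population covers the search space more evenly. The paper's WELL, Knuth, and Torus generators are not reproduced here; as a minimal sketch of the general approach, the following uses the Halton sequence (one of the quasi-random families surveyed in this work) next to a plain uniform initializer. The function names and bounds are illustrative, not taken from the paper.

```python
import random

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    result, f = 0.0, 1.0 / base
    i = index
    while i > 0:
        result += f * (i % base)
        i //= base
        f /= base
    return result

def halton_population(pop_size, dim, lower, upper):
    """Initialize `pop_size` candidates in [lower, upper]^dim with the Halton
    sequence, using the first `dim` primes as per-dimension bases."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    bases = primes[:dim]
    return [[lower + (upper - lower) * halton(i + 1, b) for b in bases]
            for i in range(pop_size)]

def uniform_population(pop_size, dim, lower, upper):
    """Baseline: the usual uniform random initialization."""
    return [[random.uniform(lower, upper) for _ in range(dim)]
            for _ in range(pop_size)]
```

Either population can then be handed to a PSO, DE, or BA loop unchanged; only the initialization step differs, which is what makes the comparison in this survey well controlled.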


Introduction

The term ‘optimization’ refers to finding the best solution to a problem at minimum cost in terms of memory, time, and resources. Sometimes processing is fast but consumes a great deal of memory, while at other times both speed and memory usage are acceptable but accuracy suffers. Optimization targets the best solution to any problem [1]. A solution is considered best if it is satisfactory in terms of processing speed, resource utilization, and accuracy of the result [2]. Optimization algorithms are used to address problems of local and global search. A typical goal behind the use of these algorithms is to discover the optima of a known input model that describes the problem to be solved [3].
