Abstract

Global optimization is employed in many practical modeling processes. Gradient-based methods for solving optimization problems can be computationally inefficient and time-consuming, particularly when convexity or differentiability is not guaranteed. Nature-inspired techniques, by contrast, offer an effective gradient-free approach to complex, non-convex, or non-differentiable problems. Genetic algorithms are among the most effective and widely used nature-inspired techniques. However, canonical genetic algorithms do not always converge to the optimum owing to the stochastic nature of their genetic operators, and they typically require additional effort to ensure convergence and improve performance. Improving the genetic operators remains an open problem and usually involves a trade-off between convergence speed and searchability. In this study, we propose an enhanced genetic algorithm that relies on a directional-based crossover operator and a normal (Gaussian) mutation operator to increase the speed of convergence while preserving searchability. The proposed algorithm is evaluated on a set of 40 typical benchmark functions in two dimensions; to examine its performance at higher dimensions, 16 functions from the test set are also evaluated at 10 and 100 dimensions. The results are compared against three modern optimization algorithms: the whale optimization algorithm (WOA), teaching-learning-based optimization (TLBO), and the covariance matrix adaptation evolution strategy (CMA-ES). The proposed algorithm outperformed the comparison algorithms on all test functions at lower dimensions and showed relatively better performance than the other algorithms at higher dimensions.
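The abstract does not spell out the operators' exact formulas, so the following is a minimal sketch of what a directional-based crossover combined with normal (Gaussian) mutation can look like in a real-valued genetic algorithm. The idea of directional crossover is to use fitness information to step from the worse parent toward the better one, rather than recombining blindly. The function names and the parameters `beta`, `rate`, and `sigma` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def directional_crossover(parent_a, fit_a, parent_b, fit_b, rng, beta=1.0):
    """Generic directional crossover sketch (minimization assumed):
    the offspring moves from the worse parent toward the better one,
    so fitness information biases the search direction."""
    if fit_a <= fit_b:
        better, worse = parent_a, parent_b
    else:
        better, worse = parent_b, parent_a
    # Random step lengths per gene along the improving direction.
    r = rng.uniform(0.0, beta, size=better.shape)
    return worse + r * (better - worse)

def normal_mutation(child, sigma, rng, rate=0.1):
    """Normal (Gaussian) mutation sketch: each gene is perturbed with
    probability `rate` by zero-mean normal noise of scale `sigma`."""
    mask = rng.random(child.shape) < rate
    return child + mask * rng.normal(0.0, sigma, size=child.shape)

# Usage on the 2-D sphere function f(x) = sum(x**2).
rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2)
pa = rng.uniform(-5.0, 5.0, 2)
pb = rng.uniform(-5.0, 5.0, 2)
child = directional_crossover(pa, f(pa), pb, f(pb), rng)
child = normal_mutation(child, sigma=0.1, rng=rng)
print(child, f(child))
```

Because the crossover exploits the fitness gradient implied by the two parents, it tends to accelerate convergence, while the Gaussian mutation keeps injecting random variation so the population does not collapse prematurely; this is the convergence-versus-searchability trade-off the abstract refers to.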
