Abstract

Political optimizer (PO) is a recent meta-heuristic optimization technique for global optimization problems, as well as real-world engineering optimization, which mimics the multi-staged process of politics in human society. However, owing to a greedy strategy during the election phase and an inappropriate balance between global exploration and local exploitation during the party-switching stage, it suffers from stagnation in local optima and low convergence accuracy. To overcome these drawbacks, a sequence of novel PO variants was proposed by integrating PO with Quadratic Interpolation, Advance Quadratic Interpolation, Cubic Interpolation, Lagrange Interpolation, Newton Interpolation, and Refraction Learning (RL). The main contributions of this work are as follows. (1) An interpolation strategy was adopted to help the current global optimum jump out of local optima. (2) RL was integrated into PO to improve the diversity of the population. (3) To better balance exploration and exploitation during the party-switching stage, a logistic model was proposed. To the best of our knowledge, this is the first time PO has been combined with an interpolation strategy and RL. The performance of the best PO variant was evaluated on 19 widely used benchmark functions and 30 test functions from the IEEE CEC 2014 suite. Experimental results revealed the superior performance of the proposed algorithm in terms of exploration capacity.
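The interpolation idea in the abstract can be illustrated with the classic three-point quadratic interpolation formula, which places a new candidate at the vertex of the parabola fitted through three known points. This is a minimal sketch of the general technique only; the function and variable names are illustrative and not taken from the paper.

```python
def quadratic_interpolation(a, b, c, fa, fb, fc):
    """Vertex of the parabola through (a, fa), (b, fb), (c, fc).

    In interpolation-based variants, a, b, c would typically be three
    solutions (applied coordinate-wise) and fa, fb, fc their fitness
    values; the vertex serves as a new trial point near a local minimum.
    """
    num = (b**2 - c**2) * fa + (c**2 - a**2) * fb + (a**2 - b**2) * fc
    den = (b - c) * fa + (c - a) * fb + (a - b) * fc
    if den == 0:  # the three points are collinear; no unique vertex
        return b
    return 0.5 * num / den

# Example: for f(x) = (x - 3)^2 sampled at x = 1, 2, 5,
# the formula recovers the exact minimizer x = 3.
x_new = quadratic_interpolation(1, 2, 5, 4, 1, 4)  # → 3.0
```

With an exactly quadratic objective the vertex is the true minimizer in one step, which is why this strategy can pull a trapped global best out of a shallow local basin.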

Highlights

  • Global optimization problems (GOPs) are inevitable in applied mathematics and practical engineering fields

  • To evaluate the impacts of ξ and p on the performance of CRLPO, additional experiments were needed. Let both ξ and p be larger than 1000; in that case, the performance of CRLPO was almost …

  • Inspired by the advantages of the interpolation strategy and the phenomenon of refraction in nature, a sequence of novel Political optimizer (PO) variants was suggested, and the best variant, combining the interpolation strategy with Refraction Learning (RL), was proposed as a hybrid with the original PO
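Refraction learning, mentioned in the highlights above, is commonly formulated in the literature as a generalization of opposition-based learning: a solution x in [lb, ub] is mapped to a "refracted" opposite controlled by a scaling factor k. The exact update used in the paper may differ, so the sketch below is illustrative only.

```python
def refraction_learning(x, lb, ub, k=1.0):
    """Refracted opposite of x within the search range [lb, ub].

    k is a scaling factor derived from the refraction analogy
    (ratio of the incident and refracted path depths). With k = 1
    this reduces to plain opposition-based learning: lb + ub - x.
    """
    mid = (lb + ub) / 2.0
    return mid + mid / k - x / k

# With k = 1 the point is simply reflected about the range midpoint:
refraction_learning(2.0, 0.0, 10.0, k=1.0)  # → 8.0
# Larger k pulls the opposite point closer to the midpoint:
refraction_learning(2.0, 0.0, 10.0, k=2.0)  # → 6.5
```

Generating such opposites for part of the population spreads candidates to the "other side" of the search range, which is how RL improves population diversity.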


Summary

Introduction

Global optimization problems (GOPs) are inevitable in applied mathematics and practical engineering fields. Most GOPs can be formulated as follows:

min f(x), x = (x1, x2, ..., xn)    (1)

where f(x) and n denote the objective function and the number of variables, respectively. R is the real field, x ∈ Q, and Q is an n-dimensional rectangle in R^n defined by

Q = {x ∈ R^n | l_i ≤ x_i ≤ u_i, i = 1, ..., n}    (2)

where [l, u] is the feasible region.
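For concreteness, formulation (1) can be instantiated with the sphere function, a standard benchmark whose global minimum is f(0) = 0. The dimension, bounds, and the naive random-search baseline below are illustrative choices, not taken from the paper.

```python
import random

def sphere(x):
    """Sphere benchmark: f(x) = x_1^2 + ... + x_n^2, minimized over
    the rectangle Q = {x | l <= x_i <= u}; global optimum f(0) = 0."""
    return sum(xi * xi for xi in x)

n, l, u = 5, -10.0, 10.0  # dimension and per-coordinate bounds [l, u]

# Pure random search over Q as a naive baseline optimizer.
random.seed(0)
best = min(
    sphere([random.uniform(l, u) for _ in range(n)])
    for _ in range(1000)
)
```

Meta-heuristics such as PO are evaluated on exactly this kind of box-constrained problem; a stronger algorithm should drive `best` far closer to 0 than random sampling does within the same evaluation budget.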
