Abstract

This study proposes EPDO, an enhanced version of the prairie dog optimization (PDO) algorithm that addresses the premature convergence and slow convergence observed in the original PDO. To improve performance, EPDO introduces several modifications. First, a dynamic opposite learning strategy increases population diversity and prevents premature convergence, helping the algorithm escape local optima and promoting global optimization. Second, EPDO employs a Lévy dynamic random walk technique; this modified Lévy flight with random walk shortens the time the algorithm needs to reach the ideal value of a test function, thereby accelerating convergence. The proposed approach is evaluated on 33 benchmark problems from CEC 2017 and compared against seven other techniques: GWO, MFO, ALO, WOA, DA, SCA, and RSA. Numerical results show that EPDO performs well on the benchmark problems. To further validate the results and assess reliability, average rank tests and the Measurement of Alternatives and Ranking according to Compromise Solution (MARCOS) method are applied, and the convergence behavior of EPDO and the comparison algorithms is reported. Furthermore, the effectiveness of EPDO is demonstrated on five design problems, where it achieves impressive outcomes and proves capable of addressing practical issues. The numerical results and validation methods used in the study confirm that the algorithm performs well on both benchmark and practical design problems.
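
The abstract does not give the authors' exact update equations. As a rough illustration only, the sketch below shows one common formulation of dynamic opposite-based learning and Mantegna's algorithm for generating Lévy-flight step lengths, i.e., the general techniques named above rather than the paper's specific EPDO operators. The function names, the weighting factor `w`, and the step scale in the usage note are illustrative assumptions.

```python
import numpy as np
from math import gamma, pi, sin

def dynamic_opposite(x, lb, ub, w=3.0, rng=np.random):
    """One common formulation of dynamic opposite-based learning:
    move a candidate toward a randomly scaled opposite point,
    then clip the result back into the search bounds."""
    xo = lb + ub - x                      # classic opposite point
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    x_do = x + w * r1 * (r2 * xo - x)     # dynamic opposite candidate
    return np.clip(x_do, lb, ub)

def levy_step(dim, beta=1.5, rng=np.random):
    """Mantegna's algorithm for Levy-stable step lengths,
    widely used to implement Levy-flight random walks."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)
```

A typical use inside a metaheuristic update would be something like `x_new = x + 0.01 * levy_step(x.size) * (x - best)`, with `dynamic_opposite` applied periodically to diversify the population; the actual schedule and coefficients in EPDO may differ.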
