Abstract
This article presents a competitive learning-based Grey Wolf Optimizer (Clb-GWO), formulated by introducing competitive learning strategies to achieve a better trade-off between exploration and exploitation while promoting population diversity through the design of difference vectors. The proposed method integrates population sub-division into majority and minority groups with a dual search system arranged in a selectively complementary manner. Clb-GWO is tested and validated on the recent CEC2020 and CEC2019 benchmark suites, followed by the optimal training of multi-layer perceptrons (MLPs) on five classification datasets and three function approximation datasets. Clb-GWO is compared against the standard version of GWO, five of its latest variants, and two modern meta-heuristics. The benchmarking results and the MLP training results demonstrate the robustness of Clb-GWO: the proposed method performed competitively against all competitors, with statistically significant performance on the benchmark tests. On the classification and function approximation datasets, Clb-GWO achieved excellent performance, with lower error rates and the smallest standard deviations.
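For context, the sketch below shows the standard GWO position update that Clb-GWO builds upon: each wolf moves toward the three best solutions (alpha, beta, delta) under stochastic coefficients A and C. This is only the well-known baseline update; the competitive-learning grouping into majority and minority sub-populations, the difference vectors, and the dual search system proposed in the article are not reproduced here, and the function and variable names are illustrative.

```python
import numpy as np

def gwo_step(positions, fitness, a):
    """One iteration of the standard Grey Wolf Optimizer update (baseline only).

    positions : (n_wolves, dim) array of candidate solutions
    fitness   : (n_wolves,) array of objective values (lower is better)
    a         : scalar decreasing linearly from 2 to 0 over the run
    """
    # Leaders: alpha (best), beta (second best), delta (third best)
    order = np.argsort(fitness)
    alpha, beta, delta = positions[order[0]], positions[order[1]], positions[order[2]]

    n, dim = positions.shape
    new_positions = np.empty_like(positions)
    for i in range(n):
        x_new = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(dim), np.random.rand(dim)
            A = 2.0 * a * r1 - a            # large |A| favors exploration, small |A| exploitation
            C = 2.0 * r2
            D = np.abs(C * leader - positions[i])
            x_new += leader - A * D          # guided move relative to this leader
        new_positions[i] = x_new / 3.0       # average of the three guided moves
    return new_positions
```

In a full run, `gwo_step` would be called once per iteration with `a` annealed from 2 to 0, followed by boundary handling and fitness re-evaluation; Clb-GWO replaces this uniform update with its selectively complementary dual search over the majority and minority groups.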