Abstract

The objective of this article is twofold. On the one hand, we introduce a cognitively inspired hybrid metaheuristic that combines the strengths of two existing metaheuristics: the artificial bee colony (ABC) algorithm and the dragonfly algorithm (DA). The aim of this hybridization is to mitigate slow convergence and entrapment in local optima by striking a good balance between the global and local search components of the constituent algorithms. On the other hand, we use the proposed metaheuristic to train a multi-layer perceptron (MLP) as an alternative to existing traditional and metaheuristic-based learning algorithms, with the goal of improving overall accuracy by optimizing the MLP's weights and biases. The proposed hybrid ABC/DA (HAD) algorithm comprises three main components: the static and dynamic swarming behavior phase of DA and two search phases of ABC. The first component performs global search (DA phase), the second performs local search (onlooker bee phase), and the third performs global search again (modified scout bee phase). The resulting metaheuristic optimizer is employed to train an MLP toward a set of weights and biases that yields higher performance than traditional learning algorithms or other metaheuristic optimizers. The proposed algorithm was first evaluated on 33 benchmark functions to test its performance on numerical optimization problems; HAD-trained MLPs were then evaluated on six standard classification datasets. In both cases, the performance of HAD was compared with that of several recent and established metaheuristics from swarm intelligence and evolutionary computing. Experimental results show that the HAD algorithm is clearly superior to the standard ABC and DA algorithms, as well as to other well-known algorithms, in terms of best solution quality, convergence speed, avoidance of local minima, and the accuracy of the trained MLPs. The proposed algorithm is thus a promising metaheuristic technique for general numerical optimization and for training MLPs. Specific applications and use cases remain to be fully explored, but the encouraging results of this study support further investigation.
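
The abstract does not give the underlying update equations, so the following is only a minimal Python sketch of the three-phase loop it describes, assuming simplified placeholder rules: a DA-inspired global move toward the current best solution, a standard ABC onlooker-style local refinement, and a scout-style re-initialization of stagnant solutions. All parameter names and defaults (`n`, `iters`, `limit`, etc.) are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the three-phase HAD loop described in the abstract.
# The exact DA and ABC update equations are not given there, so the
# steps below are simplified placeholders; only the phase structure
# (DA global search -> onlooker local search -> scout re-seeding)
# follows the abstract. Parameter names and values are illustrative.
import numpy as np

def had_optimize(f, dim, n=30, iters=500, lo=-10.0, hi=10.0, limit=50):
    rng = np.random.default_rng(0)
    X = rng.uniform(lo, hi, (n, dim))          # candidate solutions
    fit = np.apply_along_axis(f, 1, X)         # objective values (minimize)
    trials = np.zeros(n)                       # stagnation counters

    for _ in range(iters):
        # Phase 1 (global, DA-inspired): move each solution toward the best.
        best = X[np.argmin(fit)]
        X_new = np.clip(X + rng.uniform(0, 1, X.shape) * (best - X), lo, hi)
        f_new = np.apply_along_axis(f, 1, X_new)
        better = f_new < fit                   # greedy replacement
        X[better], fit[better] = X_new[better], f_new[better]
        trials = np.where(better, 0, trials + 1)

        # Phase 2 (local, ABC onlooker): fitness-proportional refinement.
        p = fit.max() - fit + 1e-12
        p /= p.sum()
        for _ in range(n):
            i = rng.choice(n, p=p)             # favor better solutions
            k, j = rng.integers(n), rng.integers(dim)
            cand = X[i].copy()
            cand[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
            cand[j] = np.clip(cand[j], lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                X[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1

        # Phase 3 (global, modified scout): re-seed stagnant solutions.
        stale = trials > limit
        if stale.any():
            X[stale] = rng.uniform(lo, hi, (int(stale.sum()), dim))
            fit[stale] = np.apply_along_axis(f, 1, X[stale])
            trials[stale] = 0

    i_best = int(np.argmin(fit))
    return X[i_best], fit[i_best]

# Example: minimize the sphere function in 5 dimensions.
x_best, f_best = had_optimize(lambda x: float(np.sum(x * x)), dim=5)
```

To use such an optimizer as an MLP trainer in the spirit of the abstract, `f` would decode each candidate vector into the network's weights and biases and return the resulting training error, so that minimizing `f` optimizes the MLP.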
