Abstract

Optimization problems are often solved with heuristic methods such as swarm intelligence algorithms because they can provide near-optimal solutions in a feasible amount of time. One such optimization problem is training an artificial neural network (ANN) to find optimal connection weights. ANNs are among the most prominent machine learning models and are applied in a wide range of areas. Their use has risen sharply in recent years owing to their ability to draw conclusions from given inputs, an ability acquired during the training phase, which must be completed before an ANN can be used. Gradient descent-based algorithms, which are typically used for training, often become trapped in local optima and therefore fail to find the optimal connection weights. Metaheuristic algorithms, including swarm intelligence algorithms, have been shown to be a better alternative for training ANNs. The Dragonfly Algorithm (DA) is a swarm intelligence algorithm that has been found to outperform several other swarm intelligence algorithms; however, despite its good performance, it suffers from weak exploitation. In this paper, we propose to further improve the performance of DA by using hill climbing as a local search to strengthen its exploitation. The improved DA is then used to train ANNs for classification problems. The experimental results show that the improved DA is more effective than the original DA: the ANNs it trains achieve a lower root mean squared error and a higher classification accuracy than those trained by the original DA.
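The sketch below is not the authors' code; it is a minimal illustration, under assumed parameters, of the key ingredients the abstract describes: an ANN whose connection weights form the search vector, an RMSE-based fitness function, and a stochastic hill-climbing local search that could refine the best solution produced by the Dragonfly Algorithm in each iteration. All names, network sizes, and step settings are illustrative assumptions.

```python
import numpy as np

def ann_forward(weights, X, n_inputs, n_hidden):
    """Single-hidden-layer ANN; `weights` is one flat vector of all connection weights."""
    w1_end = n_inputs * n_hidden
    W1 = weights[:w1_end].reshape(n_inputs, n_hidden)
    b1 = weights[w1_end:w1_end + n_hidden]
    W2 = weights[w1_end + n_hidden:w1_end + 2 * n_hidden]
    b2 = weights[-1]
    hidden = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))  # sigmoid output for binary classification

def rmse_fitness(weights, X, y, n_inputs, n_hidden):
    """Fitness to minimise: root mean squared error of the ANN on the training data."""
    pred = ann_forward(weights, X, n_inputs, n_hidden)
    return np.sqrt(np.mean((pred - y) ** 2))

def hill_climb(weights, fitness, steps=50, step_size=0.1, rng=None):
    """Simple stochastic hill climbing: keep a random perturbation only if it lowers the RMSE."""
    rng = rng or np.random.default_rng()
    best, best_fit = weights.copy(), fitness(weights)
    for _ in range(steps):
        candidate = best + rng.normal(0.0, step_size, size=best.shape)
        cand_fit = fitness(candidate)
        if cand_fit < best_fit:
            best, best_fit = candidate, cand_fit
    return best, best_fit

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_inputs, n_hidden = 4, 5
    dim = n_inputs * n_hidden + n_hidden + n_hidden + 1  # total number of connection weights
    X = rng.normal(size=(100, n_inputs))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)            # toy binary labels for illustration
    fit = lambda w: rmse_fitness(w, X, y, n_inputs, n_hidden)

    # Stand-in for the best dragonfly of one DA iteration; in the hybrid algorithm this
    # vector would come from the swarm and then be refined by the local search step.
    best_dragonfly = rng.normal(size=dim)
    refined, refined_rmse = hill_climb(best_dragonfly, fit, steps=200)
    print(f"RMSE before: {fit(best_dragonfly):.4f}  after hill climbing: {refined_rmse:.4f}")
```

In this sketch, the hill-climbing step plays the exploitation role the abstract attributes to the proposed hybrid: it searches the immediate neighbourhood of the current best weight vector, complementing the exploratory behaviour of the swarm.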
