Abstract
Determining the best set of weights and biases for training neural networks (NNs) is a computationally challenging task. Moreover, gradient descent training suffers from entrapment in local optima and slow convergence in the final iterations. The moth-flame optimization (MFO) algorithm is a recent evolutionary method inspired by the navigation behavior of moths in nature, and it has proven effective on many real-world optimization problems. In this chapter, MFO is employed to train multilayer perceptrons (MLPs), thereby overcoming the drawbacks of gradient descent algorithms. The chapter also investigates the application of MFO to the navigation of autonomous mobile robots. The results are compared with those of four powerful evolutionary algorithms: the grey wolf optimizer (GWO), cuckoo search (CS), the multi-verse optimizer (MVO), and particle swarm optimization (PSO). They are also compared with two gradient-based MLP training algorithms: Levenberg–Marquardt (LM) and back-propagation (BP). The evaluation metrics used in this book chapter are accuracy and area under the curve (AUC). The experimental results show that the MFO-based MLP algorithm outperforms the other algorithms and demonstrates its capabilities effectively.
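The idea summarized above can be sketched as follows: the MLP's weights and biases are flattened into a single vector, the training loss serves as the fitness function, and the standard MFO update (each moth spirals around an assigned flame, with the number of flames decreasing over iterations) searches the weight space instead of gradient descent. This is a minimal illustrative sketch, not the chapter's implementation: the XOR data set, network size, population size, and iteration count are assumptions chosen for brevity, and the chapter's actual benchmarks and parameter settings are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set (XOR) as an illustrative stand-in for the chapter's benchmarks.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

HIDDEN = 4
DIM = 2 * HIDDEN + HIDDEN + HIDDEN + 1  # W1, b1, W2, b2 flattened

def mlp_loss(w):
    """Decode a flat weight vector into a 2-HIDDEN-1 MLP and return the MSE."""
    i = 0
    W1 = w[i:i + 2 * HIDDEN].reshape(2, HIDDEN); i += 2 * HIDDEN
    b1 = w[i:i + HIDDEN]; i += HIDDEN
    W2 = w[i:i + HIDDEN]; i += HIDDEN
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return np.mean((out - y) ** 2)

n_moths, max_iter, b_spiral = 30, 200, 1.0

# Moths are candidate weight vectors; flames are the best solutions found so far.
moths = rng.uniform(-1.0, 1.0, (n_moths, DIM))
fitness = np.array([mlp_loss(m) for m in moths])
order = np.argsort(fitness)
flames, flame_fit = moths[order].copy(), fitness[order].copy()
history = [flame_fit[0]]

for it in range(max_iter):
    # Number of flames shrinks linearly, focusing the search over time.
    n_flames = round(n_moths - it * (n_moths - 1) / max_iter)
    a = -1.0 - it / max_iter  # spiral parameter t is drawn from [a, 1]
    for i in range(n_moths):
        j = min(i, n_flames - 1)  # surplus moths spiral around the last flame
        D = np.abs(flames[j] - moths[i])
        t = (a - 1.0) * rng.random(DIM) + 1.0
        # Logarithmic spiral update toward the flame.
        moths[i] = D * np.exp(b_spiral * t) * np.cos(2.0 * np.pi * t) + flames[j]
    fitness = np.array([mlp_loss(m) for m in moths])
    # Elitist update: flames are the best n_moths solutions seen so far.
    pool = np.vstack([flames, moths])
    pool_fit = np.concatenate([flame_fit, fitness])
    order = np.argsort(pool_fit)[:n_moths]
    flames, flame_fit = pool[order].copy(), pool_fit[order].copy()
    history.append(flame_fit[0])

print(f"best MSE: {history[0]:.4f} -> {history[-1]:.4f}")
```

Because the flame set is elitist (it always retains the best solutions found so far), the best loss in `history` never increases, which is the property that lets MFO escape the stagnation issues of pure gradient descent described above.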