Abstract

In this paper, a modified Spider Monkey Optimization (SMO) algorithm combined with a Multi-Layer Perceptron (MLP) is used to solve classification problems on five different datasets. The MLP is a widely used Neural Network (NN) variant that must be trained for each specific application, and its training suffers from slow convergence and entrapment in local minima. The original SMO struggles to find the optimal classification result when training the MLP; therefore, the SMO is enhanced with other meta-heuristic algorithms for MLP training. According to the no free lunch theorem, there is always room to improve an algorithm. With this expectation, the performance of the SMO algorithm is improved by incorporating the Differential Evolution (DE) and Grey Wolf Optimizer (GWO) algorithms to train the MLP. The resulting SMO-DE and SMO-GWO are two different hybrids designed to improve efficiency. The results of the proposed algorithms are compared with those of other well-known algorithms such as BBO, PSO, ES, SVM, KNN, and Logistic Regression. The results show that the proposed algorithms either outperform the others or are at least competitive with them.
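
The general idea of training an MLP with a population-based metaheuristic can be illustrated with a minimal sketch. Note that this is not the authors' SMO-DE or SMO-GWO hybrid: standard Differential Evolution from SciPy stands in for the hybrid optimizer, and the dataset, layer sizes, and weight bounds are illustrative assumptions.

```python
# Minimal sketch: evolve the flat weight vector of a one-hidden-layer MLP
# with Differential Evolution, minimizing classification error on the
# training set. The hybrid SMO-DE/SMO-GWO optimizers in the paper would
# replace differential_evolution() here.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n_in, n_hidden, n_out = X.shape[1], 8, len(np.unique(y))
n_params = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out  # weights + biases

def unpack(theta):
    """Split the flat parameter vector into MLP weight matrices and bias vectors."""
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]; i += n_hidden
    W2 = theta[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = theta[i:]
    return W1, b1, W2, b2

def forward(theta, X):
    """One hidden layer with tanh activation; class = argmax of the output scores."""
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def fitness(theta):
    """Training-set misclassification rate: the quantity the optimizer minimizes."""
    preds = forward(theta, X_train).argmax(axis=1)
    return np.mean(preds != y_train)

bounds = [(-5.0, 5.0)] * n_params          # assumed search range for the weights
result = differential_evolution(fitness, bounds, maxiter=100, seed=0, polish=False)

test_acc = np.mean(forward(result.x, X_test).argmax(axis=1) == y_test)
print(f"train error: {result.fun:.3f}, test accuracy: {test_acc:.3f}")
```

The design choice this sketch illustrates is that the optimizer treats the entire weight vector as a single candidate solution and evaluates fitness by the resulting classification error, so it does not rely on gradients and is therefore less prone to the local-minima and slow-convergence issues of gradient-based MLP training that motivate the paper.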
