Abstract

This paper presents an improved teaching-learning-based whale optimization algorithm (TSWOA) that incorporates the simplex method. First, combining the whale optimization algorithm (WOA) with the teaching-learning-based optimization algorithm not only achieves a better balance between exploration and exploitation in WOA, but also gives the whales a self-learning ability grounded in their biological background, greatly enriching the theory of the original WOA. Second, the simplex method is added to optimize the current worst agent, preventing agents from searching at the boundary and increasing the convergence accuracy and speed of the algorithm. To evaluate the performance of the improved algorithm, TSWOA is employed to train multi-layer perceptron (MLP) neural networks, for which devising a satisfactory and effective optimization algorithm is a difficult task. Fifteen different data sets were selected from the UCI machine learning repository, and the statistical results were compared with those of GOA, GSO, SSO, FPA, GA and WOA, respectively. The statistical results show that TSWOA performs better than WOA and several well-established algorithms for training multi-layer perceptron neural networks.
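The abstract does not specify which simplex operation is applied to the worst agent. A minimal sketch of one plausible reading, assuming the classic Nelder-Mead reflection step (the worst solution is reflected through the centroid of the remaining solutions); the function name, the reflection coefficient `alpha`, and the minimization convention are all assumptions for illustration, not the paper's stated procedure:

```python
import numpy as np

def simplex_reflect_worst(population, fitness, alpha=1.0):
    """Replace the worst agent by reflecting it through the centroid
    of the remaining agents (Nelder-Mead-style reflection; assumed here).

    population: (n_agents, dim) array of candidate solutions
    fitness:    (n_agents,) array of objective values (lower is better)
    alpha:      reflection coefficient (1.0 is the classic choice)
    """
    worst = np.argmax(fitness)                    # index of worst agent
    rest = np.arange(len(population)) != worst
    centroid = population[rest].mean(axis=0)      # centroid of the others
    reflected = centroid + alpha * (centroid - population[worst])
    new_pop = population.copy()
    new_pop[worst] = reflected                    # pull worst off the boundary
    return new_pop

# Toy usage: one reflection step on the sphere function f(x) = sum(x**2)
rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(5, 2))
fit = (pop ** 2).sum(axis=1)
pop = simplex_reflect_worst(pop, fit)
```

In this reading, the step moves the worst agent toward (and past) the mass of the population, which matches the abstract's claim of keeping agents away from the search-space boundary.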
