Abstract

The Teaching-Learning-Based Optimization (TLBO) algorithm does not require any algorithm-specific parameter settings, but it suffers from shortcomings such as slow convergence and long running time. This paper therefore proposes several improvements to the TLBO algorithm. First, the population initialization of TLBO is random, which does not guarantee a uniform distribution of initial solutions in the solution space and can reduce the algorithm's efficiency; the paper therefore applies opposition-based learning to initialize and renew the population of the TLBO algorithm. Second, to accelerate convergence, a linear decreasing inertia weight (DIW) strategy and two nonlinear DIW strategies (a parabola opening upwards and a parabola opening downwards) are each combined with TLBO. Finally, the improved TLBO algorithms are evaluated on 13 benchmark functions. The experimental results show that the improved TLBO algorithms achieve much better optimization performance than the original TLBO on most benchmark functions.
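
The sketch below illustrates the two ideas named in the abstract: opposition-based learning for population initialization and three decreasing inertia weight (DIW) schedules. It is a minimal illustration only; the function names, the elitist selection step, and the exact DIW formulas and bounds are assumptions, not the paper's specification.

import numpy as np

def opposition_based_init(pop_size, dim, lower, upper, fitness):
    # Opposition-based initialization (illustrative sketch, not the paper's exact procedure):
    # generate a random population, form its opposite population, and keep the best of both.
    pop = lower + np.random.rand(pop_size, dim) * (upper - lower)
    opp = lower + upper - pop                      # opposite point: x_opp = lower + upper - x
    union = np.vstack([pop, opp])
    scores = np.apply_along_axis(fitness, 1, union)
    return union[np.argsort(scores)[:pop_size]]    # elitist selection over the union

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4, mode="linear"):
    # Three DIW schedules decreasing from w_max to w_min over t_max iterations
    # (the bounds 0.9/0.4 and the exact curve shapes are assumed for illustration).
    r = t / t_max
    if mode == "linear":                 # straight-line decrease
        return w_max - (w_max - w_min) * r
    if mode == "parabola_up":            # parabola opening upwards (convex decrease)
        return (w_max - w_min) * (1 - r) ** 2 + w_min
    if mode == "parabola_down":          # parabola opening downwards (concave decrease)
        return w_max - (w_max - w_min) * r ** 2
    raise ValueError(f"unknown mode: {mode}")

For example, opposition_based_init(30, 10, -100.0, 100.0, lambda x: np.sum(x**2)) returns 30 individuals chosen from 60 candidates, and inertia_weight(t, t_max, mode="parabola_down") can scale the learner update step at iteration t.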
