Abstract

The Extreme Learning Machine (ELM) is a learning algorithm for training single hidden layer feed-forward neural networks (SLFNs). It offers good generalization performance, fast learning speed, and several other advantageous properties. The first step of an ELM is to assign random values to the input weights and hidden biases; the output weights are then determined in a single step using the Moore-Penrose generalized inverse. However, the random initialization of the weights and biases, together with the large number of hidden nodes typically required, can degrade the performance of the ELM. Several optimizers have therefore been proposed to increase generalization performance and produce more compact networks. This paper presents a comparative study of state-of-the-art ELM optimization using existing algorithms: the Optimally Pruned ELM (OP-ELM); the swarm-intelligence-based Grey Wolf Optimizer (GWO), Salp Swarm Algorithm (SSA), Bat Algorithm (BA), and Particle Swarm Optimization (PSO); and the Genetic Pruning Algorithm (GPA) and Enhanced Genetic Algorithm (EGA), which are pruning methods based on evolutionary algorithms. The results show that SSA-ELM improves accuracy on most benchmark datasets and outperforms the other compared methods, while OP-ELM yields the most compact model with a high level of accuracy.
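The two-step training procedure described above (randomly assigned hidden-layer parameters followed by a one-shot least-squares solve via the Moore-Penrose pseudoinverse) can be sketched in NumPy as follows. This is a minimal illustration only; the function names, the sigmoid activation, and the toy regression data are assumptions, not part of the paper under study:

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=None):
    """Train a basic ELM: random input weights/biases, then solve
    the output weights with the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Step 1: randomly assign input weights W and hidden biases b
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Hidden-layer output matrix H (sigmoid activation assumed here)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Step 2: output weights beta = pinv(H) @ T, a single analytic step
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression target y = x1 + x2 on random inputs (hypothetical data)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
T = (X[:, 0] + X[:, 1]).reshape(-1, 1)
W, b, beta = elm_train(X, T, n_hidden=20, seed=0)
pred = elm_predict(X, W, b, beta)
mse = np.mean((pred - T) ** 2)
print(mse)
```

Because the hidden-layer parameters are never tuned, all the training effort is in the single pseudoinverse solve; this is what gives the ELM its speed, and also why random initialization and hidden-layer size matter enough to motivate the optimizers compared in the paper.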
