Abstract

Extreme Learning Machine (ELM) is rapidly gaining popularity as a method for training single hidden layer feedforward networks (SLFNs) owing to its attractive properties: it is a fast learning algorithm with remarkable generalization performance. Although ELM can generally outperform traditional gradient descent-based algorithms such as backpropagation, its performance is highly sensitive to the random selection of the SLFN's input weights and hidden biases. Moreover, because of this random selection, ELM networks tend to require more hidden neurons. In this paper, we propose a new model that uses the Competitive Swarm Optimizer (CSO) to optimize the input weights and hidden biases of ELM. Two versions of ELM are considered: the classical ELM and its regularized variant. The goals of the model are to increase generalization performance, stabilize the classifier, and produce more compact networks by reducing the number of hidden neurons. The proposed model is evaluated on 15 medical classification problems. Experimental results demonstrate that it achieves better generalization performance with fewer hidden neurons and higher stability, while requiring much less training time than other metaheuristic-based ELMs.
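The abstract rests on two standard pieces of machinery: the ELM training step, where hidden-layer parameters are fixed and only the output weights are solved analytically (by Moore-Penrose pseudoinverse in classical ELM, or a ridge-style solve in regularized ELM), and the CSO loop, where the swarm is split into random pairs each generation, the winner of each pair survives unchanged, and the loser learns from the winner and from the swarm's mean position. The sketch below is a minimal illustration of both under common assumptions, not the paper's implementation: the sigmoid activation, one-hot targets, validation-error fitness, and all function names and parameter values are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_output_weights(X, T, W, b, C=None):
    """Solve ELM output weights beta for a fixed hidden layer.

    Classical ELM (C is None) uses the Moore-Penrose pseudoinverse of the
    hidden-layer output matrix H; regularized ELM solves
    (I/C + H^T H) beta = H^T T.
    """
    H = sigmoid(X @ W + b)
    if C is None:
        return np.linalg.pinv(H) @ T
    n = H.shape[1]
    return np.linalg.solve(np.eye(n) / C + H.T @ H, H.T @ T)

def decode(particle, n_features, n_hidden):
    """A particle encodes the input weights and hidden biases end to end."""
    W = particle[: n_features * n_hidden].reshape(n_features, n_hidden)
    b = particle[n_features * n_hidden:]
    return W, b

def fitness(particle, X_tr, T_tr, X_val, T_val, n_hidden, C=None):
    """Validation error (to minimize) of the ELM built from this particle.
    Assumes one-hot target matrices T_tr and T_val."""
    W, b = decode(particle, X_tr.shape[1], n_hidden)
    beta = elm_output_weights(X_tr, T_tr, W, b, C)
    pred = sigmoid(X_val @ W + b) @ beta
    return np.mean(pred.argmax(1) != T_val.argmax(1))

def cso(f, dim, m=40, iters=100, phi=0.1, lo=-1.0, hi=1.0):
    """Competitive Swarm Optimizer (m must be even): pairwise competitions;
    each loser's velocity is pulled toward its winner and the swarm mean."""
    X = rng.uniform(lo, hi, (m, dim))
    V = np.zeros((m, dim))
    for _ in range(iters):
        fit = np.array([f(x) for x in X])
        order = rng.permutation(m)
        mean = X.mean(axis=0)
        for i in range(0, m, 2):
            a, b = order[i], order[i + 1]
            win, lose = (a, b) if fit[a] <= fit[b] else (b, a)
            r1, r2, r3 = rng.random((3, dim))
            V[lose] = (r1 * V[lose] + r2 * (X[win] - X[lose])
                       + phi * r3 * (mean - X[lose]))
            X[lose] = np.clip(X[lose] + V[lose], lo, hi)
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()]
```

A hypothetical driver would bind the data into the fitness, e.g. `best = cso(lambda p: fitness(p, X_tr, T_tr, X_val, T_val, n_hidden), dim=(n_features + 1) * n_hidden)`, and then decode `best` into the final network. Because each particle only fixes the hidden layer, every fitness evaluation still performs the fast analytic ELM solve, which is what keeps this approach cheaper than gradient-based training.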
