Abstract

Extreme Learning Machine (ELM) is a recently proposed algorithm for training single hidden layer feedforward networks (SLFNs). It has many attractive properties, including good generalization performance and very fast learning. ELM starts by assigning random values to the input weights and hidden biases, and then determines the output weights in a single step using the Moore-Penrose generalized inverse. Despite these advantages, ELM performance may be affected by the random initialization of the weights and biases, or by an overly large generated network containing an unnecessary number of neurons. To improve generalization performance and produce more compact networks, this paper proposes a hybrid model that combines ELM with a competitive swarm optimizer (CSO). The proposed model (CSONN-ELM) optimizes the weights and biases and dynamically determines the most appropriate number of neurons. To evaluate its effectiveness, CSONN-ELM is tested on 23 benchmark datasets and compared to a set of static rules from the literature for determining the number of neurons in an SLFN. It is also compared to two dynamic methods used to enhance ELM performance: Optimally Pruned ELM (OP-ELM) and metaheuristic-based ELMs (Particle Swarm Optimization-ELM and Differential Evolution-ELM). The results show that the proposed method enhances the generalization performance of ELM and outperforms both the static and dynamic methods.
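To make the training procedure described above concrete, the following is a minimal NumPy sketch of standard ELM training (not the authors' CSONN-ELM): input weights and hidden biases are drawn at random, and the output weights are solved in one step via the Moore-Penrose generalized inverse. The function names, the sigmoid activation, and the uniform initialization range are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def train_elm(X, y, n_hidden, rng=None):
    """Minimal ELM sketch: random hidden layer, one-step least-squares output weights."""
    rng = np.random.default_rng() if rng is None else rng
    n_features = X.shape[1]
    # Step 1: randomly assign input weights W and hidden biases b (never retrained).
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Step 2: compute the hidden-layer output matrix H (sigmoid activation assumed).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Step 3: determine output weights beta in a single step using the
    # Moore-Penrose generalized inverse: beta = pinv(H) @ y.
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def predict_elm(X, W, b, beta):
    # Recompute hidden activations and apply the learned output weights.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because steps 1 and 3 involve no iterative optimization, training is very fast; the sensitivity to the random draw of `W` and `b`, and the need to pick `n_hidden` by hand, are exactly the weaknesses the paper's CSO-based hybrid is designed to address.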
