Abstract

Extreme learning machine (ELM) is a generalized single-hidden-layer feedforward network in which the weights and biases between the input layer and the hidden layer are randomly assigned, whereas the weights between the hidden layer and the output layer are determined analytically. The optimal number of hidden neurons in ELM is usually found by varying the number of hidden neurons over some range. Most recently published articles quote the number of hidden neurons at which the ELM gives the maximum testing accuracy. Ideally, the model should not be selected using the testing accuracy, because the testing dataset is supposed to be unseen. Selecting the number of hidden neurons by observing the training accuracy can also be misleading, since a solution with higher training accuracy might suffer from overfitting. In this work, we develop a variant of ELM that does not require manual tuning of the number of hidden neurons. The proposed variant also yields a minimal network structure, at the cost of slightly lower testing performance than the original ELM, for which the highest testing performance is quoted without any procedure for selecting the optimal number of hidden neurons. The proposed variant initially sets the number of hidden neurons to a large value and then removes highly correlated hidden neurons to minimize the network structure. The proposed approach also alleviates the overfitting problem of the original ELM. Experimental results are reported on several popular datasets taken from the KEEL repository.
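
To make the two ingredients of the abstract concrete, the sketch below shows a basic ELM (random input weights and biases, output weights solved analytically via the Moore-Penrose pseudoinverse) combined with a simple correlation-based pruning of the hidden layer. This is a minimal illustration under assumptions, not the authors' exact algorithm: the sigmoid activation, the greedy first-kept-wins pruning order, and the correlation threshold of 0.95 are all choices made for the example.

```python
# Sketch of an over-provisioned ELM whose hidden neurons are pruned when their
# activations are highly correlated with an already-kept neuron. Assumptions:
# sigmoid activation, greedy pruning order, corr_threshold=0.95.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_pruned_elm(X, y, n_hidden=200, corr_threshold=0.95, seed=None):
    rng = np.random.default_rng(seed)
    # Standard ELM: input-to-hidden weights and biases are randomly assigned.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = sigmoid(X @ W + b)  # hidden-layer output matrix

    # Prune: keep a neuron only if its activation column is not highly
    # correlated (in absolute value) with any previously kept column.
    corr = np.abs(np.corrcoef(H, rowvar=False))
    keep = []
    for j in range(n_hidden):
        if all(corr[j, k] < corr_threshold for k in keep):
            keep.append(j)
    W, b, H = W[:, keep], b[keep], H[:, keep]

    # Output weights are determined analytically via the pseudoinverse.
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta
```

Because only the hidden-to-output weights are learned, training reduces to one linear least-squares solve; the pruning step then shrinks the network while discarding mostly redundant (correlated) hidden activations.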
