Abstract

Extreme Learning Machine (ELM) algorithms have achieved strong performance in supervised machine learning tasks. However, the random preconfiguration of hidden-layer nodes in ELM models does not always lead to a suitable transformation of the original features. Consequently, the performance of these models relies on a broad exploration of these feature mappings, generally requiring a large number of nodes in the hidden layer. In this paper, a novel ELM architecture is presented, called Negative Correlation Hidden Layer ELM (NCHL-ELM), based on the Negative Correlation Learning (NCL) framework. This model incorporates a parameter into each node of the original ELM hidden layer, and these parameters are optimized by reducing the error on the training set while promoting diversity among them, in order to improve generalization. Mathematically, the ELM minimization problem is perturbed by a penalty term that represents a measure of diversity among the parameters. A variety of regression and classification benchmark datasets were selected to compare NCHL-ELM with other state-of-the-art ELM models. Statistical tests indicate the superiority of our method in both regression and classification problems.
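To make the idea concrete, below is a minimal sketch, not the authors' exact formulation: it assumes a standard ELM (random hidden weights, ridge-regularized output weights) augmented with one scaling parameter per hidden node, tuned by gradient descent on the training error plus an illustrative diversity penalty that rewards spread among the parameters. The function name `fit_nchl_elm`, the per-node parameter vector `a`, the penalty form, and the hyperparameters `lam` and `gamma` are all assumptions made for illustration.

```python
# Hedged sketch of an ELM with an NCL-style diversity penalty on per-node
# parameters. The penalty used here -- rewarding each parameter's distance
# from the parameter mean -- is an illustrative assumption, not the paper's
# exact diversity measure.
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    """Random-feature hidden layer with a sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def fit_nchl_elm(X, y, n_hidden=50, lam=1e-2, gamma=1e-1, lr=1e-2, epochs=200):
    """Solve output weights in closed form (ridge), then tune per-node
    scales `a` by gradient descent on MSE minus a diversity term."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    S = elm_hidden(X, W, b)            # random features, fixed throughout
    a = np.ones(n_hidden)              # per-node parameters (assumed form)
    for _ in range(epochs):
        H = S * a                      # column-wise scaled hidden activations
        # Ridge solution for output weights given the current scales.
        beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
        resid = H @ beta - y
        # Gradient of MSE w.r.t. a, treating beta as fixed at this step.
        grad_mse = 2.0 * (S * (resid[:, None] * beta)).mean(axis=0)
        # Diversity penalty -gamma * sum_i (a_i - mean(a))^2 rewards spread.
        grad_div = -2.0 * gamma * (a - a.mean())
        a -= lr * (grad_mse + grad_div)
    H = S * a
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    return W, b, a, beta

# Toy regression usage.
X = rng.standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
W, b, a, beta = fit_nchl_elm(X, y)
pred = (elm_hidden(X, W, b) * a) @ beta
print("train MSE:", np.mean((pred - y) ** 2))
```

Alternating a closed-form ridge solve with a gradient step on the node parameters keeps the sketch simple; the paper's actual optimization of the penalized ELM objective may differ.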
