Abstract

Extreme Learning Machine (ELM), proposed by Huang et al. [2], is a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) with extremely fast learning speed and good generalization performance. When new hidden nodes are added to an existing network, retraining the whole network is time consuming, so EM-ELM [13] was proposed to compute the output weights incrementally. However, two issues remain in EM-ELM: first, the initial hidden layer output matrix may be rank deficient, so the computation loses accuracy; second, EM-ELM does not always achieve good generalization performance because of overfitting. We therefore propose an improved, regularization-based version of EM-ELM called Incremental Regularized Extreme Learning Machine (IR-ELM). When hidden nodes are added one by one, IR-ELM updates the output weights recursively in a fast way. An enhancement of IR-ELM (EIR-ELM), which selects among candidate hidden nodes before adding one to the network, is also introduced in this paper. Empirical studies on benchmark data sets for regression and classification problems show that IR-ELM (and EIR-ELM) consistently achieves better generalization performance than EM-ELM with similar training time.
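To make the regularization idea concrete, the following is a minimal sketch of a batch regularized ELM, the building block that IR-ELM updates recursively. It is not the paper's incremental algorithm: the tanh activation, the parameter names, and the one-shot ridge solve are illustrative assumptions. The ridge term lam * I is what keeps the linear system well conditioned even when the hidden layer output matrix H is rank deficient, which is the accuracy issue the abstract raises for EM-ELM.

```python
import numpy as np

def relm_train(X, T, n_hidden, lam=1e-3, seed=None):
    """Batch regularized ELM (illustrative sketch, not the paper's IR-ELM).

    X: (N, d) inputs; T: (N, m) targets; lam: regularization coefficient.
    """
    rng = np.random.default_rng(seed)
    # Randomly assign input weights and biases, as in standard ELM.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden layer output matrix
    # Regularized least squares: beta = (H^T H + lam*I)^(-1) H^T T.
    # The ridge term avoids the rank-deficiency problem of the
    # unregularized pseudoinverse used in EM-ELM.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def relm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

IR-ELM as described in the abstract avoids recomputing this solve from scratch: when a hidden node is added, the output weights are updated recursively from the previous solution rather than by refactoring the full regularized system.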
