Abstract

The Extreme Learning Machine (ELM), proposed by Huang et al., is a simple algorithm for single-hidden-layer feedforward neural networks (SLFNs) with extremely fast learning speed and good generalization performance. When new hidden nodes are added to an existing network, retraining the whole network is time-consuming, so EM-ELM was proposed to compute the output weights incrementally. However, two issues remain in EM-ELM: (1) the initial hidden-layer output matrix may be nearly singular, so the computation loses accuracy; (2) the algorithm cannot always achieve good generalization performance because of overfitting. We therefore propose an improved version of EM-ELM based on regularization, called the Incremental Regularized Extreme Learning Machine (IR-ELM). When hidden nodes are added one by one, IR-ELM updates the output weights recursively in an efficient way. Empirical studies on benchmark data sets for regression and classification problems show that IR-ELM consistently achieves better generalization performance than EM-ELM with similar training time.
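For context, the regularized ELM that IR-ELM builds on solves only for the output weights, via the ridge solution β = (HᵀH + λI)⁻¹HᵀT, where H is the hidden-layer output matrix. The sketch below shows this batch computation; the function names, tanh activation, and regularization parameter λ are illustrative assumptions, and this is not the authors' recursive IR-ELM update, which adds nodes one by one without re-solving from scratch.

```python
import numpy as np

def elm_train_regularized(X, T, n_hidden, lam=1e-3, seed=0):
    """Train a regularized single-hidden-layer ELM (a sketch).

    Input weights and biases are random (the core ELM idea); only
    the output weights beta are solved for, using the regularized
    least-squares solution beta = (H^T H + lam*I)^{-1} H^T T.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    # Ridge term lam*I keeps the system well conditioned even when
    # H^T H is nearly singular -- the first issue noted in EM-ELM.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```

The regularization term λI addresses the near-singularity issue directly: the matrix HᵀH + λI is always invertible for λ > 0, whereas EM-ELM's pseudoinverse-based update can lose accuracy when H is ill-conditioned.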
