Abstract

Extreme learning machine (ELM) is an emerging method for training single hidden layer feedforward neural networks (SLFNs) that offers extremely fast training, easy implementation, and good generalization performance. This work presents effective ensemble procedures for combining ELMs by exploiting diversity. A large number of ELMs are initially trained in three different scenarios: the original feature input space, a feature subset obtained by forward selection, and different random subsets of features. The best combination of ELMs is constructed according to an exact ranking of the trained models, and the remaining, less useful networks are discarded. Experimental results on several regression problems show that robust ensemble approaches that exploit diversity can effectively improve performance compared with the standard ELM algorithm and other recent ELM extensions.
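The abstract does not specify the exact ranking criterion or combination rule, so the following is only a minimal sketch of the general idea: train many ELMs on random feature subsets (one of the three diversity scenarios mentioned above), rank the candidates by validation error, keep the top few, and average their predictions. All names (`train_elm`, `build_ensemble`, etc.), the tanh activation, the half-sized feature subsets, the MSE ranking score, and the simple averaging are assumptions for illustration, not the paper's method.

```python
import numpy as np

def train_elm(X, y, n_hidden, rng):
    """Train one ELM regressor: random hidden weights, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (not trained)
    b = rng.standard_normal(n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # output weights via pseudo-inverse
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def build_ensemble(X_tr, y_tr, X_val, y_val,
                   n_models=100, n_hidden=50, top_k=10, seed=0):
    """Train many ELMs on random feature subsets, rank them, keep the best top_k."""
    rng = np.random.default_rng(seed)
    d = X_tr.shape[1]
    candidates = []
    for _ in range(n_models):
        feats = rng.choice(d, size=max(1, d // 2), replace=False)  # random subset scenario
        model = train_elm(X_tr[:, feats], y_tr, n_hidden, rng)
        mse = np.mean((predict_elm(model, X_val[:, feats]) - y_val) ** 2)
        candidates.append((mse, feats, model))
    candidates.sort(key=lambda c: c[0])               # rank models by validation MSE
    return candidates[:top_k]                         # discard the less useful networks

def predict_ensemble(ensemble, X):
    """Combine the selected ELMs by averaging their predictions."""
    preds = [predict_elm(model, X[:, feats]) for _, feats, model in ensemble]
    return np.mean(preds, axis=0)

# Tiny usage example on synthetic regression data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
ens = build_ensemble(X[:150], y[:150], X[150:], y[150:])
y_hat = predict_ensemble(ens, X[150:])
```

The same skeleton covers the other two scenarios by swapping how `feats` is chosen: the full feature set for the original input space, or a subset produced by a forward-selection pass.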
