Abstract

Extreme learning machine (ELM) has attracted increasing attention for its fast learning speed and excellent generalization performance. However, the prediction of a single ELM regression model is usually unstable because the input weights and hidden-layer biases are generated randomly. To overcome this drawback, an ensemble form of ELM, termed subagging ELM, was proposed and applied to the spectral quantitative analysis of complex samples. In this approach, a series of ELM sub-models is built by randomly selecting a certain number of samples from the original training set without replacement, and the predictions of these sub-models are then combined by simple averaging to give the final ensemble prediction. The performance of the method was tested on fuel oil and blood samples. The results confirm that subagging ELM achieves much better stability and higher accuracy than a single ELM model.
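The subagging procedure described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the hidden-layer size, sigmoid activation, subsampling ratio, and all function names are assumptions.

```python
import numpy as np

class ELMRegressor:
    """Single-hidden-layer ELM: random input weights and biases,
    output weights solved by least squares (illustrative sketch)."""
    def __init__(self, n_hidden=30, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng if rng is not None else np.random.default_rng()

    def _hidden(self, X):
        # Sigmoid activation of the random hidden layer
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        # Input weights and biases are drawn randomly and never trained --
        # this randomness is the source of the instability of a single ELM
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # Output weights from a linear least-squares fit
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

def subagging_elm_predict(X_train, y_train, X_test,
                          n_models=30, subsample=0.7, seed=0):
    """Subagging ELM: train n_models ELM sub-models on random subsets
    drawn WITHOUT replacement, then average their predictions."""
    rng = np.random.default_rng(seed)
    m = int(subsample * len(X_train))
    preds = []
    for _ in range(n_models):
        # Subsample the training set without replacement
        idx = rng.choice(len(X_train), size=m, replace=False)
        model = ELMRegressor(n_hidden=30, rng=rng).fit(X_train[idx], y_train[idx])
        preds.append(model.predict(X_test))
    # Simple averaging of sub-model predictions gives the ensemble output
    return np.mean(preds, axis=0)
```

Averaging over sub-models trained on different subsamples damps the variance introduced by the random hidden layer, which is what stabilizes the ensemble prediction relative to a single ELM.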
