Abstract

With its simple theory and efficient implementation, the extreme learning machine (ELM) has become a competitive single-hidden-layer feedforward network for nonlinear multivariate calibration in chemometrics. To further improve the generalization and robustness of ELM, stacked generalization is introduced into ELM to construct a modified model called the stacked ensemble ELM (SE-ELM). SE-ELM creates a set of sub-models by applying ELM repeatedly to different sub-regions of the spectra and then combines the predictions of those sub-models according to a weighting strategy. Three weighting strategies are explored to implement the proposed SE-ELM: the winner-takes-all (WTA) strategy, the constrained non-negative least squares (CNNLS) strategy, and the partial least squares (PLS) strategy. PLS is suggested as the optimal weighting method because it can handle the multicollinearity among the predictions yielded by the sub-models. The three SE-ELM models are evaluated on six real spectroscopic datasets and compared with ELM, the back-propagation neural network (BPNN), and the radial basis function neural network (RBFNN), with statistical significance assessed by the Wilcoxon signed-rank test. The results suggest that, in general, all SE-ELM models are more robust and more accurate than traditional ELM. In particular, the proposed PLS-based weighting strategy is at least statistically no worse than, and frequently better than, the other two weighting strategies, BPNN, and RBFNN.
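The SE-ELM pipeline described above (region-wise ELM sub-models whose predictions are combined by a learned weighting) can be sketched roughly as follows. This is an illustrative sketch on synthetic data, not the paper's implementation: the hidden-layer size, region count, weight scaling, and ordinary-least-squares combination (standing in for the paper's PLS weighting step) are all assumptions made here for brevity.

```python
import numpy as np
from numpy.linalg import pinv

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=20):
    """Basic ELM: random hidden layer, output weights by least squares."""
    # Small random weights keep tanh near its linear regime on this toy data.
    W = 0.1 * rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer activations
    beta = pinv(H) @ y              # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Synthetic "spectra": 100 samples x 60 wavelengths; the target depends
# on the first 10 wavelengths plus noise (purely illustrative data).
X = rng.normal(size=(100, 60))
y = X[:, :10].sum(axis=1) + 0.1 * rng.normal(size=100)

# Split the spectrum into 6 contiguous sub-regions; one ELM sub-model each.
regions = np.array_split(np.arange(X.shape[1]), 6)
models = [elm_fit(X[:, r], y) for r in regions]

# Stack the sub-model predictions and learn combination weights.
# Here plain least squares stands in for the PLS weighting; WTA would
# instead keep only the single best-performing sub-model's prediction.
P = np.column_stack([elm_predict(m, X[:, r]) for m, r in zip(models, regions)])
w = pinv(P) @ y
y_hat = P @ w
```

In practice the combination weights would be estimated on held-out data rather than the training set, and a genuine PLS step would be used when the sub-model predictions are strongly collinear.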
