Abstract

Extreme learning machine (ELM) is a fast learning algorithm for single-hidden-layer feedforward neural networks. However, the stability of the ELM cannot generally be guaranteed, because its parameters, namely the hidden-layer biases and the connecting weights between the input layer and the hidden layer, are generated randomly. Moreover, it is difficult for a single model to achieve high prediction accuracy on a dataset containing low-quality data. In this paper, we first propose a modified residual ELM (R-ELM) to improve the ELM's learning performance. In R-ELM, the first ELM is trained on the original dataset, and the $m$-th ($m > 1$) ELM is trained on the residuals between the ground truths and the predictions of the previous ensemble model (with $m-1$ ELMs). R-ELM (with $m$ ELMs) is built in the direction of error reduction by computing the $m$-th ELM's optimal weight, which is determined by the loss function of the R-ELM. As a result, R-ELM can fit almost all of the information in the training set. However, this ability does not guarantee a similar performance on the testing dataset. To address this problem, we add $L_{2}$ regularization to the loss function of the R-ELM, yielding the regularized R-ELM (RR-ELM), which avoids the overfitting problem of R-ELM. In RR-ELM, $L_{2}$ regularization encourages each ELM to ignore unnecessary information in the training set. To verify the effectiveness of the two proposed algorithms, experiments are performed on real data from a blast furnace. The experimental results show that the proposed RR-ELM and R-ELM are more stable than a single ELM, and that the two proposed methods are more accurate than the average output of a group of ELMs, the single ELM, and support vector regression.
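To make the abstract's residual-training idea concrete, the following is a minimal sketch, not the authors' implementation: each member is a basic ELM with a ridge ($L_{2}$-regularized) least-squares solve for its output weights, and the $m$-th member is trained on the residual left by the first $m-1$ members. All class and function names are illustrative, and the sketch sums member outputs with unit weights rather than computing the paper's optimal per-member weight.

```python
import numpy as np

class ELM:
    """Minimal single-hidden-layer ELM sketch: random input weights and biases,
    ridge-regularized least-squares solve for the output weights."""

    def __init__(self, n_hidden=50, reg=1e-3, rng=None):
        self.n_hidden = n_hidden
        self.reg = reg  # L2 regularization strength
        self.rng = rng if rng is not None else np.random.default_rng(0)

    def _hidden(self, X):
        # Sigmoid activations of the hidden layer
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        # Random input-to-hidden weights/biases: the source of ELM's instability
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # Ridge solution: beta = (H^T H + reg * I)^{-1} H^T y
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

def fit_residual_elms(X, y, n_models=5, **elm_kwargs):
    """Train a residual ensemble: ELM 1 fits y; ELM m (m > 1) fits the
    residual left by the first m-1 members."""
    models, residual = [], y.copy()
    for m in range(n_models):
        elm = ELM(rng=np.random.default_rng(m), **elm_kwargs).fit(X, residual)
        residual = residual - elm.predict(X)
        models.append(elm)
    return models

def predict_ensemble(models, X):
    # Ensemble prediction: sum of the member outputs
    return sum(m.predict(X) for m in models)
```

Because each ridge solve can do no worse (in penalized training loss) than the zero solution, every added member leaves the training residual norm no larger than before, which matches the abstract's claim that R-ELM is built in the direction of error reduction; the $L_{2}$ term plays the RR-ELM role of discouraging members from memorizing noise.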
