Abstract

As a relatively new neural network model, the extreme learning machine (ELM) offers fast learning speed and good generalization ability. However, its single-hidden-layer structure often fails to achieve good results on large-scale problems with many features. To resolve this problem, we propose a multi-layer framework for the ELM learning algorithm to improve the model's generalization ability. Moreover, noise and outliers are common in practical applications, so clean training data cannot always be obtained, and the generalization ability of the original ELM decreases under such circumstances. To address this issue, we add model bias and variance terms to the loss function so that the model can minimize both, thereby reducing the influence of noise. The resulting robust multi-layer algorithm, ML-RELM, is proposed to enhance robustness to outliers in complex datasets. Simulation results show that the proposed method achieves high generalization ability and strong robustness to noise.
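For orientation, the sketch below shows a standard single-hidden-layer, ridge-regularized ELM in NumPy, the baseline that ML-RELM extends. It is a minimal illustration only: the multi-layer stacking and the exact form of the bias and variance terms added to the ML-RELM loss are not specified in the abstract, so they are not reproduced here, and all function names are hypothetical.

```python
# Minimal sketch of a basic ridge-regularized ELM (NOT the paper's ML-RELM).
# The multi-layer structure and the bias/variance loss terms described in the
# abstract are omitted, since their exact formulation is given only in the full text.

import numpy as np

def elm_fit(X, T, n_hidden=100, reg=1e-2, seed=None):
    """Fit ELM output weights by regularized least squares.

    X : (n_samples, n_features) training inputs
    T : (n_samples, n_outputs) training targets
    """
    rng = np.random.default_rng(seed)
    # Input weights and biases are drawn randomly and never trained (core ELM idea).
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix
    # Closed-form ridge solution: beta = (H^T H + reg * I)^(-1) H^T T
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Map inputs through the fixed random hidden layer and learned output weights."""
    return np.tanh(X @ W + b) @ beta

if __name__ == "__main__":
    # Toy regression with injected outliers, illustrating why robustness matters.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    T = np.sin(X) + 0.1 * rng.standard_normal(X.shape)
    T[::20] += 3.0  # a few corrupted targets (outliers)
    W, b, beta = elm_fit(X, T, n_hidden=50, reg=1e-1, seed=1)
    print(elm_predict(np.array([[0.5]]), W, b, beta))
```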
