Abstract
Extreme learning machine (ELM) is a non-iterative algorithm for training single-hidden-layer feed-forward networks, and its training speed is much faster than that of conventional neural networks. However, its objective is only to minimize the empirical risk, which can easily lead to overfitting. To overcome this defect, this paper proposes a novel algorithm named LGE2LM, based on the localized generalization error model, which provides an upper bound on the generalization error by adopting the stochastic sensitivity. LGE2LM aims to improve the generalization capability of ELM by using a regularization technique that makes a trade-off between the empirical risk and the stochastic sensitivity measure. In essence, LGE2LM solves a quadratic problem without any iterative process. As in ELM, all the parameters of the new algorithm are obtained without tuning, which makes LGE2LM far more efficient than traditional neural networks. Experiments conducted on both artificial and real-world datasets show that LGE2LM achieves much better generalization capability and stronger robustness than ELM.
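To make the regularization trade-off concrete, the following is a minimal sketch of a regularized ELM in NumPy. The abstract does not specify the stochastic sensitivity matrix used by LGE2LM, so `S` below is a placeholder (the identity matrix, which reduces the sketch to ridge-regularized ELM) and `lam` is a hypothetical trade-off weight; only the non-iterative, closed-form structure of the training step reflects the description above.

```python
import numpy as np

def train_elm(X, T, n_hidden, lam=1e-2, seed=0):
    """Sketch of regularized ELM training: random hidden layer,
    closed-form output weights. `S` and `lam` are illustrative
    stand-ins, not the LGE2LM sensitivity term itself."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Input weights and biases are randomly assigned and never tuned,
    # as in standard ELM.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer output matrix
    S = np.eye(n_hidden)            # placeholder regularizer matrix
    # Closed-form solution of the regularized least-squares problem:
    #   minimize ||H beta - T||^2 + lam * beta^T S beta
    beta = np.linalg.solve(H.T @ H + lam * S, H.T @ T)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

A single call to `train_elm` yields the output weights in closed form; there is no gradient descent or epoch loop, which is what makes ELM-style training non-iterative and fast.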