Abstract

Extreme Learning Machine (ELM), first proposed by Huang et al. in 2004, outperforms traditional learning machines such as BP networks and SVMs in some applications. This paper attempts to give an oscillation bound on the generalization performance of ELM and to explain why ELM is not sensitive to the number of hidden nodes, both essential open problems posed by Huang et al. in 2011. The bound is derived in the framework of statistical learning theory, under the assumption that the expectation of the ELM kernel exists. Our bound is consistent with previously reported experimental results on ELM and predicts that overfitting can be avoided even when the number of hidden nodes approaches infinity. This prediction is confirmed by our experiments on 15 data sets using one kind of activation function whose parameters are each drawn independently from the same Gaussian distribution, which satisfies the assumption above. The experiments also show that, as the number of hidden nodes approaches infinity, the ELM kernel induced by this activation becomes insensitive to the kernel parameter.
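To make the setting concrete, below is a minimal Python sketch of standard ELM training under the assumption described in the abstract: every hidden-node parameter is drawn independently from the same Gaussian distribution, and the output weights are obtained by a least-squares (Moore-Penrose pseudoinverse) fit. The tanh activation and the names elm_train and elm_predict are illustrative assumptions, not the authors' exact activation function or code.

    import numpy as np

    def elm_train(X, T, L, seed=None):
        """Train a basic ELM.
        X: (n, d) inputs; T: (n, m) targets; L: number of hidden nodes.
        All hidden-node parameters are drawn i.i.d. from one Gaussian,
        matching the assumption under which the ELM kernel expectation exists."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W = rng.standard_normal((d, L))   # random input weights
        b = rng.standard_normal(L)        # random biases
        H = np.tanh(X @ W + b)            # hidden-layer output matrix
        beta = np.linalg.pinv(H) @ T      # least-squares output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

Roughly speaking, as L grows, the normalized Gram matrix (1/L) H Hᵀ concentrates around the ELM kernel E_θ[g(x; θ) g(x′; θ)]; the existence of this expectation is the assumption under which the paper's oscillation bound holds, and it is what allows the bound to remain stable as the number of hidden nodes approaches infinity.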
