Abstract

Extreme learning machines (ELMs) have been applied successfully to many real-world problems, owing to their fast training speed and good generalization performance. However, to guarantee convergence, the ELM algorithm initially requires a large number of hidden nodes. In addition, ELMs suffer from two drawbacks: over-fitting and the sensitivity of accuracy to the number of hidden nodes. The aim of this paper is to propose a new smoothing \(L_{1/2}\)-regularized extreme learning machine to overcome these two drawbacks. The main advantage of the proposed approach is that it drives weights toward smaller values during training, so that nodes with sufficiently small weights can be removed after training to obtain a network of suitable size. Numerical experiments have been carried out on approximation and multi-class classification problems, and preliminary results show that the proposed approach works well.
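The idea sketched in the abstract can be illustrated with a small numerical experiment. The sketch below is an illustrative assumption, not the paper's exact algorithm: it trains an ELM's output weights by gradient descent with a smoothed \(L_{1/2}\) penalty (the smoothing replaces the unbounded gradient of \(|\beta|^{1/2}\) near zero with a bounded stand-in), then prunes hidden nodes whose output weight shrank below a tolerance. All names and hyperparameters (`lam`, `eps`, `lr`, `n_hidden`, `prune_tol`) are chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_l12_grad(beta, eps=1e-3):
    # Gradient of a smoothed |b|^(1/2) penalty: above eps we use the true
    # derivative 0.5 * sign(b) / sqrt(|b|); below eps we substitute a smooth
    # quadratic whose derivative matches at |b| = eps (illustrative smoothing,
    # not necessarily the paper's smoothing function).
    a = np.abs(beta)
    return np.where(a > eps,
                    0.5 * np.sign(beta) / np.sqrt(np.maximum(a, eps)),
                    0.5 * beta / eps**1.5)

# Toy 1-D regression data
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0])

n_hidden = 40
W = rng.normal(size=(1, n_hidden))   # random input weights (fixed, as in ELM)
b = rng.normal(size=n_hidden)        # random hidden biases (fixed)
H = np.tanh(X @ W + b)               # hidden-layer output matrix

beta = rng.normal(scale=0.1, size=n_hidden)  # trainable output weights
lam, lr = 1e-3, 0.01                         # assumed penalty weight / step size
for _ in range(5000):
    err = H @ beta - y
    grad = H.T @ err / len(y) + lam * smoothed_l12_grad(beta)
    beta -= lr * grad

# Remove hidden nodes whose output weight the penalty drove near zero
prune_tol = 1e-2
keep = np.abs(beta) > prune_tol
mse = np.mean((H[:, keep] @ beta[keep] - y) ** 2)
print(f"kept {keep.sum()} of {n_hidden} hidden nodes, pruned-network MSE {mse:.4f}")
```

The point of the sketch is the mechanism the abstract describes: the \(L_{1/2}\) penalty shrinks redundant output weights during training, so a post-hoc threshold yields a smaller network while the retained nodes preserve the fit.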
