Abstract

Extreme learning machine (ELM) has become a popular topic in machine learning in recent years. ELM is a single-hidden-layer feedforward neural network with an extremely low computational cost. ELM, however, has two evident drawbacks: 1) solving the output weights by the Moore-Penrose generalized inverse is a least-squares minimization problem, which easily suffers from overfitting, and 2) the accuracy of ELM is highly sensitive to the number of hidden neurons, so a large model is usually generated. This brief presents a sparse Bayesian approach for learning the output weights of ELM in classification. The new model, called Sparse Bayesian ELM (SBELM), resolves both drawbacks by estimating the marginal likelihood of the network outputs and automatically pruning most of the redundant hidden neurons during the learning phase, which yields an accurate and compact model. The proposed SBELM is evaluated on a wide range of benchmark classification problems, which verifies that the accuracy of the SBELM model is relatively insensitive to the number of hidden neurons; hence, a much more compact model is always produced compared with other state-of-the-art neural network classifiers.
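To make the two steps mentioned in the abstract concrete, below is a minimal NumPy sketch, not the authors' implementation: elm_fit shows the standard ELM least-squares solution via the Moore-Penrose pseudoinverse, and ard_prune shows an ARD-style sparse Bayesian re-estimation loop in the spirit of SBELM. For brevity the sketch assumes a Gaussian likelihood and a fixed noise precision, whereas the paper's classifier uses a Bernoulli likelihood with an approximate posterior; all function names and parameter values here are illustrative.

    import numpy as np

    def elm_hidden(X, W, b):
        """Sigmoid random-feature hidden layer H = sigmoid(X @ W + b)."""
        return 1.0 / (1.0 + np.exp(-(X @ W + b)))

    def elm_fit(X, y, n_hidden=200, seed=0):
        """Standard ELM: random input weights, output weights from the
        Moore-Penrose pseudoinverse -- the least-squares step the abstract
        identifies as prone to overfitting."""
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = elm_hidden(X, W, b)
        beta = np.linalg.pinv(H) @ y      # minimises ||H @ beta - y||^2
        return W, b, beta

    def ard_prune(H, y, n_iter=200, prune_at=1e6, noise_prec=1.0):
        """ARD-style sparse Bayesian re-estimation of the output weights
        (Gaussian likelihood used here for brevity; SBELM itself uses a
        Bernoulli likelihood for classification). Hidden neurons whose
        weight precision alpha_i diverges carry no evidence and are pruned."""
        keep = np.arange(H.shape[1])
        alpha = np.ones(H.shape[1])       # per-weight prior precisions
        for _ in range(n_iter):
            Hk = H[:, keep]
            A = np.diag(alpha[keep])
            Sigma = np.linalg.inv(A + noise_prec * Hk.T @ Hk)  # posterior cov.
            mu = noise_prec * Sigma @ Hk.T @ y                 # posterior mean
            gamma = np.clip(1.0 - alpha[keep] * np.diag(Sigma), 0.0, 1.0)
            alpha[keep] = gamma / (mu ** 2 + 1e-12)            # evidence update
            keep = keep[alpha[keep] < prune_at]                # drop diverged weights
        Hk = H[:, keep]
        Sigma = np.linalg.inv(np.diag(alpha[keep]) + noise_prec * Hk.T @ Hk)
        mu = noise_prec * Sigma @ Hk.T @ y
        return keep, mu                   # surviving neurons and their weights

A toy run illustrates the claimed compactness: starting from a deliberately oversized hidden layer, only a small subset of neurons should survive pruning, so accuracy does not hinge on choosing n_hidden precisely.

    rng = np.random.default_rng(1)
    X = rng.standard_normal((300, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    W, b, _ = elm_fit(X, y, n_hidden=200, seed=1)
    keep, beta = ard_prune(elm_hidden(X, W, b), y)
    print(f"hidden neurons kept: {len(keep)} / 200")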
