Abstract

Extreme Learning Machine (ELM) is a popular machine learning method with very few parameters, fast training speed and efficient models. A significant drawback, however, is that ELM is restricted to a single-layer structure by its prized analytic solution: simply stacking more layers makes the analytic solution intractable, so gradient-based optimization becomes necessary and the model reduces to an ordinary neural network. Recently, a multi-layer ELM (ML-ELM) was proposed to learn compact features with a series of ELM auto-encoders, attempting to extend ELM to a deeper network without sacrificing its elegant solution. Compared with ML-ELM and the subsequent hierarchical ELM, we introduce a sparse Bayesian learning method to impose a stronger sparse regularization and prune the network structure. Experiments on classification verify the efficiency of our proposed multi-layer ELM for unsupervised feature learning.
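The analytic solution the abstract refers to is the standard single-layer ELM recipe: input weights and biases are drawn at random and never trained, and only the output weights are solved in closed form by least squares via the pseudo-inverse of the hidden-layer activations. A minimal NumPy sketch of that recipe (function names and the toy regression task are illustrative, not from the paper):

```python
import numpy as np

def elm_fit(X, T, n_hidden=64, seed=0):
    """Train a single-layer ELM: random hidden layer, analytic output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                 # closed-form least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: fit y = sin(x) on a 1-D grid.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_fit(X, T)
pred = elm_predict(X, W, b, beta)
```

Because only `beta` is learned, training is a single pseudo-inverse, which is the speed advantage the abstract cites; stacking such layers breaks this one-shot solution, motivating the auto-encoder-based ML-ELM approach.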
