Abstract

Extreme Learning Machine (ELM) randomly assigns its input weights and biases, which inevitably introduces stochastic behavior and reduces generalization performance. In this paper, we propose a meta-learning model of ELM, called Meta-ELM. The Meta-ELM architecture consists of several base ELMs and one top ELM, so Meta-ELM learning proceeds in two stages. First, each base ELM is trained on a subset of the training data. Then, the top ELM is learned with the base ELMs as its hidden nodes. Theoretical analysis and experimental results on several artificial and benchmark regression datasets show that the proposed Meta-ELM model is feasible and effective.
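The two-stage procedure described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the tanh activation, node counts, number of base ELMs, and half-sized random subsets are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, T, n_hidden):
    """One base ELM: random input weights/biases, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # Moore-Penrose least-squares fit
    return lambda Xq: np.tanh(Xq @ W + b) @ beta

def train_meta_elm(X, T, n_base=5, n_hidden=20):
    n = X.shape[0]
    # Stage 1: train each base ELM on a random subset of the training data
    # (subset size is an assumption for this sketch).
    bases = []
    for _ in range(n_base):
        idx = rng.choice(n, size=n // 2, replace=False)
        bases.append(train_elm(X[idx], T[idx], n_hidden))
    # Stage 2: the top ELM treats the base ELMs' outputs as hidden-node
    # outputs and solves for its output weights by least squares.
    G = np.column_stack([f(X) for f in bases])
    beta_top = np.linalg.pinv(G) @ T
    return lambda Xq: np.column_stack([f(Xq) for f in bases]) @ beta_top

# Toy regression problem: learn y = sin(x).
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X).ravel()
model = train_meta_elm(X, T)
pred = model(X)
```

Note that both stages reduce to a single pseudoinverse computation, which is what makes ELM-style training fast compared with gradient-based methods.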
