Abstract

The extreme learning machine (ELM) is a single-hidden-layer feedforward neural network designed for fast training and good generalization. In an ELM, the weights of the hidden neurons need not be tuned: they are assigned randomly, and the output weights are computed analytically using the Moore-Penrose generalized inverse. This makes the ELM classifier well suited to homogeneous ensemble models, because the random, untuned hidden weights promote diversity among members even when they are trained on the same data. This paper studies the effectiveness of ELM ensemble models in solving classification problems with small sample sizes. The study covers two ensemble variants: a plain ELM ensemble with majority voting (ELE) and a random subspace ELM ensemble (RS-ELM). To simulate small-sample conditions, only 30% of the available data is used for training. Experimental results show that, according to a Friedman test, the RS-ELM model outperforms a multi-layer perceptron (MLP) model, while the ELE model performs comparably to the MLP.
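
To make the mechanics concrete, the following is a minimal Python sketch of an ELM classifier and a random-subspace majority-voting ensemble built from it. This is not the authors' implementation: the class names (`ELMClassifier`, `RandomSubspaceELM`), the sigmoid activation, and the parameter defaults (hidden-layer size, subspace ratio, ensemble size) are all illustrative assumptions; only the use of random untuned hidden weights, the Moore-Penrose pseudoinverse for the output weights, feature subsampling, and majority voting follow the abstract.

```python
import numpy as np

class ELMClassifier:
    """Sketch of an ELM: random hidden weights, pseudoinverse output weights."""

    def __init__(self, n_hidden=100, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng if rng is not None else np.random.default_rng()

    def _hidden(self, X):
        # Random projection followed by a sigmoid activation (assumed choice).
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        n_features = X.shape[1]
        n_classes = int(y.max()) + 1
        # Hidden weights are drawn randomly and never tuned; a fresh draw per
        # ensemble member is what creates diversity on identical training data.
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        T = np.eye(n_classes)[y]           # one-hot targets
        self.beta = np.linalg.pinv(H) @ T  # Moore-Penrose generalized inverse
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)


class RandomSubspaceELM:
    """Sketch of the RS-ELM idea: each member trains on a random feature
    subset, and predictions are combined by majority vote."""

    def __init__(self, n_members=10, subspace=0.5, n_hidden=100, seed=0):
        self.n_members = n_members
        self.subspace = subspace  # fraction of features per member (assumed)
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        k = max(1, int(self.subspace * d))
        self.members = []
        for _ in range(self.n_members):
            idx = self.rng.choice(d, size=k, replace=False)
            clf = ELMClassifier(self.n_hidden, rng=self.rng).fit(X[:, idx], y)
            self.members.append((idx, clf))
        return self

    def predict(self, X):
        votes = np.stack([clf.predict(X[:, idx]) for idx, clf in self.members])
        # Majority vote across members, per sample.
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

Dropping the feature subsampling (i.e., training every member on all features) reduces this to the plain ELE variant, where diversity comes solely from the random hidden weights.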
