Abstract

Extreme learning machine (ELM) is a fast training scheme for single-hidden-layer feedforward neural networks. How to further improve the prediction stability and accuracy of ELM through ensemble learning has become one of the hot research topics in the field of supervised learning. This paper proposes an attribute bagging-based ELM (AB-ELM), an ensemble learning system for classification and regression tasks that trains its base ELMs on random samples of attributes instead of the entire attribute set. AB-ELM uses sampling with replacement to generate multiple randomized attribute subsets, so that different data subsets can be constructed for training the base ELMs. After obtaining a set of base ELMs, the weighted averaging method and the weighted voting method are used to generate a combined output, where the weights account for the information amount of the corresponding training data subsets. The relationship between the size of the attribute subsets and the number of base ELMs is also discussed in AB-ELM. On four classification and four regression data sets, we verify the training and testing performance of AB-ELM in comparison with the classical ELM and the voting-based ELM (V-ELM). The experimental results show that AB-ELM achieves better prediction stability and accuracy than the classical ELM and V-ELM, which demonstrates the effectiveness of AB-ELM.
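The sketch below illustrates the general idea described in the abstract: each base ELM is trained on a random attribute subset drawn with replacement, and the ensemble output is a weighted combination. All names (ELM, ABELMRegressor), the sigmoid activation, and the subset-size-based weighting are illustrative assumptions; the paper's exact weighting by the information amount of each data subset is not specified in the abstract and is only approximated here by a placeholder.

```python
import numpy as np

class ELM:
    """Single-hidden-layer ELM: random hidden weights, analytic output weights."""
    def __init__(self, n_hidden=50, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng or np.random.default_rng()

    def _hidden(self, X):
        # Sigmoid hidden-layer activations (activation choice is an assumption).
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # Output weights via the Moore-Penrose pseudo-inverse, as in classical ELM.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta


class ABELMRegressor:
    """Attribute-bagging ELM ensemble (sketch): each base ELM sees a random
    attribute subset sampled with replacement; predictions are combined by
    a weighted average."""
    def __init__(self, n_estimators=20, subset_size=5, n_hidden=50, seed=0):
        self.n_estimators = n_estimators
        self.subset_size = subset_size
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        members = []
        for _ in range(self.n_estimators):
            # Sample attribute indices with replacement, keep the distinct ones.
            idx = np.unique(self.rng.integers(0, n_features, self.subset_size))
            elm = ELM(self.n_hidden, self.rng).fit(X[:, idx], y)
            # Placeholder weight: number of distinct attributes in the subset.
            # (The paper instead weights by the information amount of the subset.)
            members.append((idx, elm, float(len(idx))))
        total = sum(w for _, _, w in members)
        self.members = [(idx, elm, w / total) for idx, elm, w in members]
        return self

    def predict(self, X):
        # Weighted averaging of the base ELM outputs (weighted voting would be
        # the analogous combination rule for classification).
        return sum(w * elm.predict(X[:, idx]) for idx, elm, w in self.members)
```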
