Abstract

Feature subsampling techniques help to create diverse classifiers for ensembles. In this article we investigate two feature subsampling-based ensemble methods, the Random Subspace Method (RSM) and the Rotation Forest Method (RFM), to explore their usability with different learning algorithms and their robustness on noisy data. The experiments show that RSM with IBk works better than RFM and AdaBoost, and that RFM with tree and rule classifiers achieves more prominent improvements than the other combinations. We also find that the Logistic algorithm is not suitable for any of the three ensembles. When classification noise is added to the original data sets, the ensembles outperform single classifiers at lower noise levels but fail to maintain this superiority at higher noise levels.
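
For illustration, the following is a minimal sketch of the Random Subspace Method with a nearest-neighbour base learner. It assumes scikit-learn, using BaggingClassifier restricted to feature subsampling and KNeighborsClassifier as a stand-in for Weka's IBk; the parameter values are illustrative and do not reproduce the experimental setup of the article.

    # Random Subspace Method sketch (assumes scikit-learn is installed).
    # Each ensemble member is trained on a random subset of features only,
    # with no resampling of instances, which is the defining property of RSM.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    # Synthetic data stands in for the benchmark data sets used in the study.
    X, y = make_classification(n_samples=500, n_features=40, n_informative=15,
                               random_state=0)

    rsm_ibk = BaggingClassifier(
        estimator=KNeighborsClassifier(n_neighbors=1),  # IBk-style 1-NN learner
        n_estimators=50,
        max_samples=1.0,           # keep every training instance ...
        bootstrap=False,           # ... no bootstrap sampling of instances
        max_features=0.5,          # each member sees a random half of the features
        bootstrap_features=False,  # feature subsets drawn without replacement
        random_state=0,
    )

    print(cross_val_score(rsm_ibk, X, y, cv=5).mean())

Note that older scikit-learn releases name the first BaggingClassifier parameter base_estimator rather than estimator.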
