Abstract
While diversity has been argued to be the rationale for the success of an ensemble of classifiers, little has been said about how uniform use of the feature space influences classification error. A recent result, published elsewhere, observed that, among several ensembles of decision trees, those with a more uniform feature-use frequency also had a smaller classification error. This paper provides further support for that hypothesis. We conducted experiments over 60 classification datasets, using 42 different types of decision tree ensembles, to test it. Our results validate the hypothesis, motivating the design of ensemble construction methods that make more uniform use of features for classification problems of low and medium dimensionality.
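The abstract does not specify how feature-use uniformity is measured. As a minimal sketch, assuming scikit-learn's RandomForestClassifier stands in for one of the ensemble variants and normalized Shannon entropy of per-feature split counts serves as the uniformity measure, one could relate uniformity to classification error as follows (measure choice and dataset are illustrative assumptions, not the paper's protocol):

```python
# Sketch: quantify how uniformly a tree ensemble uses features, then
# compare against its cross-validated classification error.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset (assumption)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Count how often each feature is chosen as a split variable across all trees.
n_features = X.shape[1]
counts = np.zeros(n_features)
for tree in forest.estimators_:
    split_features = tree.tree_.feature           # leaves are marked with -2
    used = split_features[split_features >= 0]
    counts += np.bincount(used, minlength=n_features)

# Normalized entropy of the feature-use frequency: 1.0 means perfectly
# uniform use of all features; values near 0 mean a few features dominate.
freq = counts / counts.sum()
entropy = -np.sum(freq[freq > 0] * np.log(freq[freq > 0]))
uniformity = entropy / np.log(n_features)

error = 1.0 - cross_val_score(forest, X, y, cv=5).mean()
print(f"feature-use uniformity: {uniformity:.3f}, CV error: {error:.3f}")
```

Repeating this measurement across many ensemble types and datasets, as the paper does, would let one check whether higher uniformity tends to accompany lower error.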