Abstract

Ensembles of Classifiers are composed of parallel-organized components (individual classifiers) whose outputs are combined by a combination method that provides the final output of the ensemble. In this context, a Dynamic Ensemble System (DES) is an ensemble-based system in which, for each test pattern, a different ensemble structure is defined by selecting a subset of classifiers from an initial pool of classifiers. Any criterion can be used during the selection process of a DES, the most important ones being accuracy and distance. Distance measures assess the distance between the classifier outputs on a validation set, and the main examples of such measures are diversity and similarity. In this paper, we investigate the impact of selection criteria in DES methods. More specifically, we focus on the use of different distance measures (diversity and similarity) as selection criteria. To do this, an empirical analysis was conducted using six DES methods (three existing methods and three proposed in this paper) on 20 classification datasets. Our findings indicate that a distance measure improves the overall performance of state-of-the-art ensemble generation methods.
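The sketch below is a minimal illustration of the dynamic selection idea described above, not the specific methods evaluated in the paper: it assumes (hypothetically) that the region of competence for each test pattern is its k nearest validation neighbours, ranks classifiers by local accuracy, and optionally adds a pairwise disagreement score as a stand-in for a distance (diversity) criterion. All function names, parameters, and the majority-vote combiner are illustrative assumptions.

```python
# Minimal sketch of per-pattern (dynamic) classifier selection; illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier


def des_predict(pool, X_val, y_val, x_query, k=7, n_select=5, use_distance=True):
    """Predict x_query with a per-pattern subset of `pool` (dynamic selection)."""
    # 1. Region of competence: the k validation samples nearest to the query pattern.
    #    (Refitting the neighbour index per query is kept only for clarity.)
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    _, idx = nn.kneighbors(x_query.reshape(1, -1))
    X_roc, y_roc = X_val[idx[0]], y_val[idx[0]]

    # 2. Local accuracy of every classifier inside the region of competence.
    roc_preds = np.array([clf.predict(X_roc) for clf in pool])   # shape (n_clf, k)
    local_acc = (roc_preds == y_roc).mean(axis=1)                # shape (n_clf,)

    # 3. Greedy selection: best local accuracy first; optionally favour classifiers
    #    whose outputs disagree most with those already selected (a distance measure).
    selected = [int(np.argmax(local_acc))]
    while len(selected) < min(n_select, len(pool)):
        remaining = [i for i in range(len(pool)) if i not in selected]
        if use_distance:
            dist = [np.mean([np.mean(roc_preds[i] != roc_preds[j]) for j in selected])
                    for i in remaining]
            score = local_acc[remaining] + np.array(dist)        # accuracy + distance
        else:
            score = local_acc[remaining]
        selected.append(remaining[int(np.argmax(score))])

    # 4. Combine the selected classifiers by majority vote.
    votes = np.array([pool[i].predict(x_query.reshape(1, -1))[0] for i in selected])
    return np.bincount(votes).argmax()


if __name__ == "__main__":
    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
    X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

    # Pool of weak classifiers trained on bootstrap samples (bagging-style generation).
    rng = np.random.default_rng(0)
    pool = [DecisionTreeClassifier(max_depth=3).fit(X_tr[b], y_tr[b])
            for b in (rng.integers(0, len(X_tr), len(X_tr)) for _ in range(15))]

    preds = np.array([des_predict(pool, X_val, y_val, x) for x in X_te])
    print("DES accuracy:", (preds == y_te).mean())
```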
