Abstract

Diversity among base classifiers is one of the key issues in classifier combination. Although the Eigenclassifiers method proposed by Ulaş et al. (2012) aims to create uncorrelated base classifier outputs, in multiclass classification problems correlation among base classifier outputs still arises because of redundant features in the transformed classifier output space, which leads to higher estimator variance and lower prediction accuracy. In this paper, we extend the Eigenclassifiers method to obtain truly uncorrelated base classifiers. We also generalize the distribution assumed on base classifier outputs from unimodal to multimodal, which lets us handle the class imbalance problem. In addition, we aim to answer the question of which classifier fusion method should be used for a given dataset. To answer this question, we generate a dataset by computing the performances of ten different fusion methods on 38 different datasets. We investigate the accuracy–diversity relationship of ensembles on this experimental dataset using eigenvalue distributions and the diversity measures defined by Kuncheva and Whitaker (2001). We derive basic rules that can be used to choose a fusion method for a given dataset.
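The abstract rests on two technical ingredients that a short sketch may help make concrete: decorrelating stacked base classifier outputs via an eigen-decomposition (the Eigenclassifiers idea), and measuring pairwise diversity in the style of Kuncheva and Whitaker. The Python sketch below is a minimal illustration, not the authors' implementation: it stacks per-class posteriors from several base classifiers, applies PCA to obtain uncorrelated outputs, drops the near-zero-variance components that arise from the multiclass redundancy the abstract mentions, trains a simple combiner on the result, and computes the pairwise Q statistic. The dataset, base classifiers, variance threshold, and combiner are all assumptions chosen for illustration.

```python
# Minimal sketch of the Eigenclassifiers idea, under assumed choices of
# dataset, base classifiers, combiner, and variance threshold.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base classifiers whose posterior outputs will be combined.
bases = [LogisticRegression(max_iter=1000),
         GaussianNB(),
         DecisionTreeClassifier(random_state=0)]
for clf in bases:
    clf.fit(X_tr, y_tr)

# Concatenate per-class posteriors into one output vector per sample:
# shape (n_samples, n_classifiers * n_classes).
Z_tr = np.hstack([clf.predict_proba(X_tr) for clf in bases])
Z_te = np.hstack([clf.predict_proba(X_te) for clf in bases])

# PCA yields uncorrelated "eigenclassifier" outputs on the training data.
# Because each classifier's posteriors sum to 1, some components have
# near-zero variance; discarding them removes the redundant directions.
pca = PCA()
E_tr = pca.fit_transform(Z_tr)
n_keep = int(np.sum(pca.explained_variance_ > 1e-10))  # assumed threshold
E_te = pca.transform(Z_te)[:, :n_keep]

# A simple stacking-style combiner trained on the decorrelated outputs.
combiner = LogisticRegression(max_iter=1000).fit(E_tr[:, :n_keep], y_tr)
print("ensemble accuracy:", combiner.score(E_te, y_te))

# Pairwise Q statistic (one of Kuncheva and Whitaker's diversity measures):
# Q = (N11*N00 - N01*N10) / (N11*N00 + N01*N10), where Nab counts samples
# the first/second classifier gets right (1) or wrong (0). Q is undefined
# when the denominator is zero, so we return NaN in that case.
def q_statistic(pred_i, pred_j, y_true):
    ci, cj = pred_i == y_true, pred_j == y_true
    n11, n00 = np.sum(ci & cj), np.sum(~ci & ~cj)
    n10, n01 = np.sum(ci & ~cj), np.sum(~ci & cj)
    denom = n11 * n00 + n01 * n10
    return np.nan if denom == 0 else (n11 * n00 - n01 * n10) / denom

print("Q(base0, base1):",
      q_statistic(bases[0].predict(X_te), bases[1].predict(X_te), y_te))
```

Values of Q near 1 indicate that two classifiers tend to succeed and fail on the same samples (low diversity), while values near -1 indicate complementary error patterns (high diversity).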
