Abstract

Diversity among base classifiers is known to be a key driver in the construction of an effective ensemble classifier. Several methods have been proposed to construct diverse base classifiers using artificially generated training samples. However, in these methods, diversity is often obtained at the expense of the accuracy of the base classifiers. Inspired by the localized generalization error model, a new sample generation method is proposed in this study. When preparing different training sets for the base classifiers, the proposed method generates samples located within limited neighborhoods of the corresponding training samples. The generated samples differ from the original training samples and expand different parts of the original training data. Learning from these datasets yields a set of base classifiers that are accurate in different regions of the input space while maintaining appropriate diversity. Experiments performed on 26 benchmark datasets showed that: (1) our proposed method significantly outperformed several state-of-the-art ensemble methods in terms of classification accuracy; and (2) our proposed method was significantly more efficient than other sample-generation-based ensemble methods.
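The following is a minimal sketch in Python of the general idea described above: each base classifier is trained on the original data expanded with samples drawn from a small neighborhood of each training point, and the ensemble combines the base classifiers by majority vote. The neighborhood width q, the use of uniform noise, decision-tree base learners, and the voting rule are illustrative assumptions, not the paper's exact procedure derived from the localized generalization error model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def neighborhood_ensemble(X, y, n_estimators=10, q=0.1, seed=None):
    """Hypothetical sketch: train base classifiers on training sets expanded
    with samples generated within small neighborhoods of the original points."""
    rng = np.random.default_rng(seed)
    estimators = []
    for _ in range(n_estimators):
        # Generate samples within a q-neighborhood of each training point
        # (uniform noise here; the paper's generation scheme may differ).
        noise = rng.uniform(-q, q, size=X.shape)
        X_new = X + noise
        # Expand the original data with the generated samples so each base
        # classifier remains accurate while the training sets differ.
        X_aug = np.vstack([X, X_new])
        y_aug = np.concatenate([y, y])
        clf = DecisionTreeClassifier(random_state=int(rng.integers(1 << 31)))
        clf.fit(X_aug, y_aug)
        estimators.append(clf)
    return estimators

def predict(estimators, X):
    """Combine the base classifiers by majority vote
    (class labels assumed to be integers 0..K-1)."""
    votes = np.stack([clf.predict(X) for clf in estimators])
    return np.apply_along_axis(
        lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)
```

Because every generated sample stays close to an original training point, each augmented training set emphasizes a slightly different region of the input space, which is the intended source of diversity without a large sacrifice in base-classifier accuracy.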
