Abstract

In ensemble learning, several base learners are combined to obtain a stronger learner, and a good ensemble is often far more accurate than any of the individual learners that make it up. Ensemble pruning searches for a subset of the ensemble members that performs as well as, or better than, the original ensemble. We analyze the accuracy, diversity, and generalization ability of base learners for classification, and prove that an ensemble constructed from learners with better generalization ability itself generalizes better. We then use the graph Laplacian to evaluate the generalization ability of learners on the data sets and propose an efficient hybrid-metric-based method for estimating individual contributions that fully reflects the performance of each member classifier. A multi-objective sorting method is used to obtain the best ordering under the hybrid metric. Experimental results show that the proposed method is effective.
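
The abstract does not spell out how the graph Laplacian is used, but a common construction is to build a k-nearest-neighbour similarity graph over the data, form the unnormalized Laplacian L = D - W, and score a classifier's predictions f by the quadratic form f^T L f, which is small when predictions change little between similar points. The sketch below illustrates that idea only; the function names, the RBF edge weighting, and the k-NN construction are assumptions for illustration, not the paper's exact method.

```python
# Minimal sketch (assumed construction, not the paper's formulation):
# smoothness of a classifier's predictions over a k-NN graph Laplacian
# is used as a rough proxy for generalization ability.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def graph_laplacian(X, k=10, gamma=1.0):
    """Unnormalized graph Laplacian L = D - W of a k-NN similarity graph."""
    # k-NN distances, symmetrized so the graph is undirected.
    A = kneighbors_graph(X, n_neighbors=k, mode="distance").toarray()
    A = np.maximum(A, A.T)
    # RBF weights on connected pairs, zero elsewhere.
    W = np.where(A > 0, np.exp(-gamma * A ** 2), 0.0)
    D = np.diag(W.sum(axis=1))
    return D - W

def smoothness_score(L, f):
    """f^T L f: small values mean the predictions f vary little
    between strongly connected (i.e. similar) data points."""
    f = np.asarray(f, dtype=float)
    return float(f @ L @ f)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 2))          # toy data set
    f = np.sign(X[:, 0])                  # a toy classifier's {-1, +1} outputs
    L = graph_laplacian(X, k=5)
    print("smoothness of toy predictions:", smoothness_score(L, f))
```

Under this assumed scheme, each base learner's predictions on the data would receive a smoothness score, which could then be combined with accuracy into a hybrid metric for ordering the members before pruning.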
