Abstract

In this paper, we propose a multi-class boosting method (multiBoost.imb) that addresses the difficulties of learning from imbalanced data sets while permitting the use of stable base learners. A random resampling strategy is incorporated to diversify the training data and to restore balance among all classes. By extending AdaBoost with an error adjustment parameter, the method avoids early termination of the training phase in multi-class scenarios. Experiments were conducted on three public face databases and two synthetic data sets. The results demonstrate that stable learners can be used in our ensemble method and that, in multi-class problems, the ensemble overcomes early termination even when a stable learner is employed. Our method improves learning performance in all cases, especially when the imbalance ratio is high. A comparison with SMOTEBoost and RUSBoost further reveals the advantage of our method in handling multi-class, imbalanced face recognition problems.
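The abstract does not give implementation details, but its two key ideas, per-round balanced random resampling and an error adjustment term that prevents early termination in the multi-class setting, can be illustrated with a minimal sketch. Everything below is a hypothetical reconstruction, not the authors' exact algorithm: the function names `balanced_resample` and `multiboost_imb_sketch`, the use of decision stumps as base learners, and the SAMME-style `log(K - 1)` adjustment are all assumptions made for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def balanced_resample(X, y, rng):
    """Randomly resample each class up to the majority-class size,
    recovering balance among all classes while diversifying the
    training set (one plausible reading of the resampling strategy)."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_max, replace=True)
        for c in classes
    ])
    return X[idx], y[idx]

def multiboost_imb_sketch(X, y, n_rounds=50, seed=0):
    """Minimal multi-class boosting loop. The log(K - 1) term plays the
    role of an error adjustment: with it, a weak learner only needs
    accuracy above random guessing (1/K), so training does not stop
    as soon as the weighted error exceeds 1/2, which is what causes
    plain AdaBoost to terminate early in multi-class problems."""
    rng = np.random.default_rng(seed)
    n, K = len(y), len(np.unique(y))
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        # Rebalance and diversify the training data each round.
        Xb, yb = balanced_resample(X, y, rng)
        h = DecisionTreeClassifier(max_depth=1).fit(Xb, yb)
        miss = h.predict(X) != y
        err = np.dot(w, miss)
        if err >= 1.0 - 1.0 / K:   # worse than random guessing: stop
            break
        alpha = np.log((1.0 - err) / max(err, 1e-10)) + np.log(K - 1.0)
        w *= np.exp(alpha * miss)  # up-weight misclassified examples
        w /= w.sum()
        learners.append(h)
        alphas.append(alpha)
    return learners, alphas

def predict(learners, alphas, X, classes):
    """Weighted majority vote over the ensemble."""
    votes = np.zeros((len(X), len(classes)))
    for h, a in zip(learners, alphas):
        pred = h.predict(X)
        for j, c in enumerate(classes):
            votes[:, j] += a * (pred == c)
    return classes[np.argmax(votes, axis=1)]
```

Under these assumptions, resampling replaces weighted fitting of the base learner (the sample weights still drive the error and the updates), which is how resampling-based boosting variants typically accommodate learners that cannot accept instance weights.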
