Abstract

As a widely used ensemble method, AdaBoost has attracted considerable attention in the machine learning community. However, AdaBoost is highly sensitive to outliers, and its performance can degrade substantially when the training samples are contaminated by outliers. For binary and multi-class classification, many approaches have emerged to improve the robustness of AdaBoost against outliers. Unfortunately, little research has addressed the robustness of AdaBoost against outliers in the one-class classification setting. In this study, the exponential loss function of AdaBoost is replaced with a more robust one to improve the anti-outlier ability of the conventional AdaBoost-based ensemble of one-class support vector machines (OCSVMs). Furthermore, based on the redesigned loss function, the update formulae for the weights of the base classifiers and for the probability distribution over the training samples are reformulated for the AdaBoost ensemble of OCSVMs. An upper bound on the empirical error is derived from a theoretical viewpoint. Experimental results on artificial and benchmark data sets show that the proposed ensemble approach is more robust against outliers than related methods.
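The abstract does not reproduce the redesigned loss or the reformulated update rules, but the overall training loop it describes can be sketched. The Python sketch below is illustrative only: the function names (robust_ocsvm_adaboost, ensemble_predict), the logistic-style bounded reweighting factor, and all parameter defaults are assumptions for demonstration, not the paper's actual formulae. It uses scikit-learn's OneClassSVM as the base learner and swaps the exponential reweighting of conventional AdaBoost for a bounded factor, which is the general mechanism the abstract attributes to the robust loss.

```python
import numpy as np
from sklearn.svm import OneClassSVM


def robust_ocsvm_adaboost(X, n_rounds=10, nu=0.1, gamma="scale"):
    """Train an AdaBoost-style ensemble of OCSVMs with a bounded weight update.

    The logistic-style bounded factor below is an illustrative stand-in for
    the paper's robust loss; the exact loss and update formulae are not
    given in the abstract.
    """
    n = X.shape[0]
    D = np.full(n, 1.0 / n)               # distribution over training samples
    learners, alphas = [], []
    for _ in range(n_rounds):
        clf = OneClassSVM(nu=nu, gamma=gamma)
        clf.fit(X, sample_weight=D * n)   # weighted training of the base OCSVM
        pred = clf.predict(X)             # +1 = accepted as target, -1 = rejected
        err = np.clip(D[pred == -1].sum(), 1e-10, 0.5 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)
        # Bounded update: each factor lies in (0, 1), so rejected points
        # (often outliers) gain relative weight more slowly than under the
        # exponential update exp(-alpha * pred) of conventional AdaBoost.
        D *= 1.0 / (1.0 + np.exp(alpha * pred))
        D /= D.sum()
        learners.append(clf)
        alphas.append(alpha)
    return learners, np.asarray(alphas)


def ensemble_predict(learners, alphas, X):
    """Weighted vote of the base OCSVMs: +1 for target, -1 for outlier."""
    votes = sum(a * clf.predict(X) for a, clf in zip(alphas, learners))
    return np.where(votes >= 0, 1, -1)
```

Under this update, the ratio between the factors applied to rejected and accepted points is exp(alpha) rather than the exp(2*alpha) of standard AdaBoost, so the sample distribution concentrates on hard-to-fit points more gradually, which is one simple way the anti-outlier behavior described in the abstract can arise.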
