Abstract

Ensembles of outlier detectors have drawn increasing attention recently, despite the difficulty of developing ensembles within an unsupervised learning framework. We note that existing outlier ensembles often use fixed fusion rules (e.g., majority voting) to aggregate individual learners. In theory, these individual learners are assumed to make independent errors, so that the ensemble can outperform any single model. In practice, however, this assumption is difficult to satisfy. Dynamically selecting more competent individual learner(s) for each test pattern can alleviate this problem effectively. Inspired by this idea, this paper proposes a dynamic ensemble outlier detection model that uses one-class classifiers as base learners. Because the competence of each base detector is estimated entirely from data points in the validation set, the composition of that set has a significant impact on the selection. To achieve an efficient selection, we propose an adaptive k-nearest neighbor (KNN) rule, instead of the traditional KNN algorithm, to construct the validation set for each test pattern. The adaptive KNN rule first applies support vector data description (SVDD) to identify the local region around the test pattern in which the class-conditional probabilities are not constant. Competences estimated from neighboring patterns in this region should therefore be more accurate than those obtained with the traditional KNN rule. A probabilistic model based on the posterior probabilities of the one-class classifiers is then used to estimate classifier competence. Using data sets from the UCI repository, we present experimental evidence that the proposed model improves detection performance over single models and a variety of static ensemble models.
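The core idea of dynamic selection described above can be illustrated with a minimal sketch. The code below is an assumption-laden simplification, not the paper's method: the adaptive, SVDD-based neighborhood is replaced with a plain k-NN neighborhood over a labeled validation set, and scikit-learn's `OneClassSVM` with an RBF kernel stands in for SVDD as one of the base one-class detectors. All data, parameters, and the `predict_dynamic` helper are hypothetical.

```python
# Hedged sketch of dynamic ensemble selection for outlier detection.
# Assumptions (not from the paper): a labeled validation set exists,
# competence = accuracy on the test point's k-NN validation neighborhood,
# and OneClassSVM (RBF) approximates SVDD as a base detector.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))                  # inliers only
X_val = np.vstack([rng.normal(size=(80, 2)),         # validation inliers
                   rng.uniform(4, 6, size=(20, 2))]) # validation outliers
y_val = np.array([1] * 80 + [-1] * 20)               # 1 = inlier, -1 = outlier

# Two diverse base one-class detectors, trained on inliers only.
detectors = [
    OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X_train),
    IsolationForest(random_state=0).fit(X_train),
]

# Plain k-NN over the validation set (the paper's adaptive rule would
# instead shape this neighborhood with SVDD around each test pattern).
nn = NearestNeighbors(n_neighbors=15).fit(X_val)

def predict_dynamic(x):
    """Select the detector most accurate on x's validation neighborhood."""
    _, idx = nn.kneighbors(x.reshape(1, -1))
    neigh_X, neigh_y = X_val[idx[0]], y_val[idx[0]]
    competences = [np.mean(d.predict(neigh_X) == neigh_y) for d in detectors]
    best = detectors[int(np.argmax(competences))]
    return best.predict(x.reshape(1, -1))[0]         # 1 = inlier, -1 = outlier

print(predict_dynamic(np.array([0.0, 0.0])))  # point in the dense inlier region
print(predict_dynamic(np.array([5.5, 5.5])))  # point far from the training data
```

Per-point selection is what distinguishes this from a static ensemble: a fixed fusion rule such as majority voting applies the same combination everywhere, whereas here a detector that is weak globally can still be chosen in the local regions where it is competent.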
