Abstract

Given an unsupervised outlier detection task, how should one select i) a detection algorithm, and ii) associated hyperparameter values (jointly called a model)? Effective outlier model selection is essential as different algorithms may work well for varying detection tasks, and moreover their performance can be quite sensitive to the values of the hyperparameters (HPs). On the other hand, unsupervised model selection is notoriously difficult in the absence of hold-out validation data with ground-truth labels. Therefore, the problem is vastly understudied in the outlier mining literature. There exists a body of work that proposes internal model evaluation strategies for selecting a model. These so-called internal strategies rely solely on the input data (without labels) and the output (outlier scores) of the candidate models. In this paper, we first survey internal model evaluation strategies, including both those proposed specifically for outlier detection and those that can be adapted from the unsupervised deep representation learning literature. Then, we investigate their effectiveness empirically in comparison to simple baselines such as random selection and the popular state-of-the-art detector Isolation Forest (iForest) with default HPs. To this end, we set up (and open-source) a large testbed with 39 detection tasks and 297 candidate models comprising 8 different detectors and various HP configurations. We evaluate internal strategies from 7 different families on their ability to discriminate between models w.r.t. detection performance, without using any labels. Our study reports a striking finding: none of the existing and adapted strategies would be practically useful; stand-alone ones are not significantly different from random, and consensus-based ones do not outperform iForest (with default HPs) while being more expensive (as all candidate models need to be trained for evaluation). Our survey stresses the importance of and the standing need for effective unsupervised outlier model selection, and acts as a call for future work on the problem.
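To make the setup concrete, the following is a minimal sketch of the problem the abstract describes: a grid of candidate models (detector + HP configuration) producing outlier scores on unlabeled data, a simple consensus-based internal selection strategy, and the iForest-with-default-HPs baseline. It assumes scikit-learn, numpy, and scipy; the detectors, hyperparameter grid, and consensus rule are illustrative assumptions, not the paper's exact protocol or testbed.

```python
# Hypothetical sketch of unsupervised outlier model selection:
# candidate models = detector x HP configuration, evaluated without labels.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
# Toy data: a Gaussian cluster of inliers plus a few scattered outliers.
X = np.vstack([rng.randn(300, 2), rng.uniform(-6, 6, size=(15, 2))])

def outlier_scores(name, params, X):
    """Return scores where larger means 'more outlying' for each candidate model."""
    if name == "iforest":
        return -IsolationForest(random_state=0, **params).fit(X).score_samples(X)
    if name == "lof":
        return -LocalOutlierFactor(**params).fit(X).negative_outlier_factor_
    if name == "ocsvm":
        return -OneClassSVM(**params).fit(X).decision_function(X)
    raise ValueError(name)

# Candidate models: a small, illustrative detector/HP grid.
candidates = (
    [("iforest", {"n_estimators": n}) for n in (50, 100, 200)]
    + [("lof", {"n_neighbors": k}) for k in (5, 20, 50)]
    + [("ocsvm", {"nu": nu}) for nu in (0.05, 0.1, 0.2)]
)
scores = {f"{name}:{params}": outlier_scores(name, params, X)
          for name, params in candidates}

# Consensus-based internal strategy (one of many possible): rank-average all
# candidates' scores, then pick the model whose ranking agrees most with the
# consensus. Note that every candidate must be trained -- the cost the abstract
# points out -- and no labels are used anywhere.
consensus = np.mean([np.argsort(np.argsort(s)) for s in scores.values()], axis=0)
best = max(scores, key=lambda m: spearmanr(scores[m], consensus).correlation)
print("consensus-based pick:", best)

# Baseline the paper compares against: a single iForest with default HPs.
baseline_scores = -IsolationForest(random_state=0).fit(X).score_samples(X)
```

Under this (assumed) consensus rule, the selected model is whichever candidate ranks the points most similarly to the pooled ranking; the paper's finding is that such label-free strategies, across 7 families, fail to beat the far cheaper default-HP iForest baseline.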
