Abstract

We describe a probabilistic, nonparametric method for anomaly detection, based on a squared-loss objective function that has a simple analytical solution. The method emerges from extending recent work in nonparametric least-squares classification to include a "none-of-the-above" class which models anomalies in terms of non-anomalous training data. The method shares the flexibility of other kernel-based anomaly detection methods, yet is typically much faster to train and test. It can also be used to distinguish between multiple inlier classes and anomalies. The probabilistic nature of the output makes it straightforward to apply even when test data have structural dependencies; we show how a hidden Markov model framework can be incorporated in order to identify anomalous subsequences in a test sequence. Empirical results on datasets from several domains show the method to have comparable discriminative performance to popular alternatives, but with a clear speed advantage.
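To give a flavor of the approach, the sketch below illustrates a kernel least-squares formulation of anomaly detection with a closed-form solution. This is an illustrative simplification, not the authors' exact method: all function names and the specific regularized least-squares setup (Gaussian kernel, ridge penalty `lam`, inlier target of 1, anomaly probability as the clipped complement of the inlier score) are assumptions for demonstration.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    # Pairwise Gaussian kernel values between rows of A and rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit(X, sigma=1.0, lam=0.1):
    # Closed-form ridge solution: fit kernel expansion coefficients so that
    # every inlier training point maps to target score 1
    K = gaussian_kernel(X, X, sigma)
    n = len(X)
    alpha = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ np.ones(n))
    return alpha

def anomaly_prob(X_train, alpha, X_test, sigma=1.0):
    # Inlier score clipped to [0, 1]; the "none-of-the-above" (anomaly)
    # probability is its complement, so points far from the training
    # data, where the kernel expansion decays to zero, score near 1
    s = np.clip(gaussian_kernel(X_test, X_train, sigma) @ alpha, 0.0, 1.0)
    return 1.0 - s
```

Because training reduces to solving one regularized linear system, both fitting and scoring are fast relative to methods requiring iterative optimization, which mirrors the speed advantage claimed in the abstract.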
