Abstract

Recently, deep learning-based anomaly detection methods have achieved significant performance improvements by introducing large neural network architectures and complex anomaly scoring functions. However, the computational cost and memory usage required in the inference phase have also increased significantly, limiting their use in real-time applications. In this paper, we propose a score distillation method that adopts the concept of knowledge distillation. An existing high-performance anomaly detection method is used as the teacher, and a small neural network is trained as the student to mimic the teacher's scoring function. In the inference phase, the anomaly score for a query instance is obtained by a single forward pass through the student network, without any complicated computation. We demonstrate that the proposed method makes anomaly detection faster and more efficient while maintaining high performance.
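
The following is a minimal sketch of the score distillation idea described above, assuming a PyTorch setting. The teacher here is a stand-in callable that returns an anomaly score per instance (in the paper it would be an existing high-performance detector), and the student is a small MLP trained to regress the teacher's scores with an MSE objective. All names, network sizes, and the toy teacher are illustrative assumptions, not details from the paper.

```python
# Sketch of score distillation: a small student network is trained to mimic
# the anomaly scores produced by an existing (expensive) teacher detector.
# Hypothetical names and hyperparameters; not the authors' implementation.
import torch
import torch.nn as nn


class StudentScorer(nn.Module):
    """Small network mapping an input instance to a scalar anomaly score."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


def distill(teacher_score, train_x: torch.Tensor, epochs: int = 200) -> StudentScorer:
    """Train the student to regress the teacher's scoring function."""
    student = StudentScorer(train_x.shape[1])
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    with torch.no_grad():
        targets = teacher_score(train_x)  # teacher scores computed once, offline
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(student(train_x), targets)
        loss.backward()
        opt.step()
    return student


if __name__ == "__main__":
    # Toy teacher: distance to the training-data mean as the anomaly score.
    x = torch.randn(512, 16)
    center = x.mean(dim=0)
    teacher = lambda q: (q - center).norm(dim=1)

    student = distill(teacher, x)

    # Inference: a single forward pass through the student yields the score.
    query = torch.randn(8, 16)
    print(student(query))
```

In this sketch the teacher is queried only during training; at inference time the anomaly score comes from one forward pass through the small student, which is what yields the speed and memory savings the abstract refers to.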
