Abstract

The current state-of-the-art anomaly detection methods based on knowledge distillation (KD) typically rely on smaller student networks or reverse distillation to counteract the vanishing representation discrepancy on anomalies. These methods often struggle to detect anomalies precisely against complex textured backgrounds, because anomalous and non-anomalous regions appear similar. We therefore propose a new paradigm, Cosine Similarity Knowledge Distillation (CSKD), for surface anomaly detection and localization. Unlike traditional KD-based methods, we exploit the superior performance obtained when the teacher and student share the same deeper encoder architecture trained under the distillation loss. Specifically, we introduce an Attention One-Class Embedding (AOCE) into the student network to strengthen its learning capability and suppress the response similarity of the teacher-student (T-S) model in anomalous regions. Furthermore, observing that the optimal model for each class is otherwise selected at hard-coded, class-specific epochs, we design an adaptive optimal model selection method. Extensive experiments on the MVTec dataset, where our method achieves 99.2% image-level AUROC and 98.2%/94.7% pixel-level AUROC/PRO, demonstrate that it outperforms existing unsupervised anomaly detection algorithms. Additional experiments on the DAGM dataset and one-class anomaly detection benchmarks further confirm the superiority of the proposed method.
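To make the core idea concrete, the sketch below shows a generic cosine-similarity feature distillation loss between teacher and student feature maps. It is a minimal illustration under assumed PyTorch tensor shapes and multi-scale features, not the authors' exact CSKD formulation.

```python
import torch
import torch.nn.functional as F

def cosine_distillation_loss(teacher_feats, student_feats):
    """Generic cosine-similarity distillation loss over multi-scale
    feature maps of shape (B, C, H, W). On normal training images the
    student is pushed to match the teacher (similarity -> 1); at test
    time, low similarity marks candidate anomalous regions.
    Minimal sketch; not the paper's exact CSKD formulation.
    """
    loss = 0.0
    for t, s in zip(teacher_feats, student_feats):
        # Cosine similarity along the channel dimension -> (B, H, W)
        sim = F.cosine_similarity(t, s, dim=1)
        # Penalize deviation from perfect agreement on normal data
        loss = loss + (1.0 - sim).mean()
    return loss / len(teacher_feats)

# Hypothetical usage with two feature scales from identical encoders
if __name__ == "__main__":
    teacher_feats = [torch.randn(2, 256, 64, 64), torch.randn(2, 512, 32, 32)]
    student_feats = [torch.randn(2, 256, 64, 64), torch.randn(2, 512, 32, 32)]
    print(cosine_distillation_loss(teacher_feats, student_feats).item())
```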
