Abstract

Knowledge distillation offers an efficient approach to unsupervised anomaly detection. Recently, a reverse distillation (RD) model was proposed as a novel teacher-student (T-S) framework for this problem [7]. In this model, the student network takes the teacher's one-class embedding as input and aims to restore the teacher's representations. Distillation proceeds from high-level abstract representations down to low-level features through a module called the one-class bottleneck embedding (OCBE). Although its performance is impressive, the architecture can still benefit from transforming the input images before they are fed in. In this paper, instead of using only raw images, we additionally transform them with augmentation techniques. The teacher encodes both the raw and the transformed inputs, yielding a raw representation and a transformed representation, respectively. The student must then restore the raw representation from the bottleneck embedding of the transformed representation. Experimental results on benchmarks for anomaly detection and one-class novelty detection show that our model outperforms state-of-the-art methods, demonstrating the utility and applicability of the proposed strategy.
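As a rough illustration of the objective described above, the following PyTorch sketch shows one way the training loss might be computed. The names `teacher`, `bottleneck` (the OCBE module), and `student` are hypothetical stand-ins for the pretrained encoder, the one-class bottleneck embedding, and the decoder of the RD framework [7]; the per-scale cosine-distance loss follows RD's general idea, but all signatures here are assumptions, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(teacher, bottleneck, student, x_raw, x_aug):
    """Sketch of the training loss, assuming each module returns a
    list of multi-scale feature maps of shape (B, C, H, W).

    x_raw: batch of raw images; x_aug: the same batch after augmentation.
    """
    # The teacher is frozen, so no gradients flow through it.
    with torch.no_grad():
        feats_raw = teacher(x_raw)   # raw representation (target)
        feats_aug = teacher(x_aug)   # transformed representation

    # The student restores the raw representation from the bottleneck
    # embedding of the transformed representation.
    restored = student(bottleneck(feats_aug))

    # Cosine-distance loss per scale, in the spirit of RD [7].
    # Flattening all dimensions is a simplification; RD compares
    # feature vectors per spatial location along the channel axis.
    loss = 0.0
    for f_t, f_s in zip(feats_raw, restored):
        f_t, f_s = f_t.flatten(1), f_s.flatten(1)
        loss = loss + (1 - F.cosine_similarity(f_s, f_t, dim=1)).mean()
    return loss
```

At test time, only raw images would be passed through the pipeline, and the per-location discrepancy between teacher and student features would serve as the anomaly score, as in the original RD formulation.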
