Abstract

We address the problem of anomaly detection (AD) using a deep network pretrained with self-supervised learning on an auxiliary geometric transformation (GT) classification task. Our key contribution is a novel loss function that augments the standard cross-entropy with an additional term that plays a significant role in the later stages of self-supervised learning. The enabling innovation is a triplet centre loss with an adaptive margin and a learnable metric, which continually drives the GT classes towards ever-greater compactness and inter-class separation. The pretrained network is fine-tuned for the downstream task using non-anomalous data only, and a GT model of the data is constructed. Anomalies are detected by fusing the outputs of several decision functions defined over the learnt GT class model. In contrast to the majority of existing methods, our approach strictly adheres to the pure AD design philosophy, relying solely on non-anomalous data at design time. Extensive experiments on four publicly available AD datasets demonstrate the effectiveness of the proposed contributions and yield significant performance gains over the state-of-the-art (1.8% on F-MNIST, 1.0% on CIFAR-10, 1.2% on CIFAR-100, and 1.7% on CatVsDog). Code: https://github.com/12sf12/Deep-Anomaly-Detection
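To make the objective concrete, the sketch below illustrates the general form of a cross-entropy loss augmented with a triplet centre loss: each feature is pulled towards its own class centre and pushed away from the nearest rival centre by at least a margin. This is a minimal NumPy illustration only; the paper's adaptive margin and learnable metric are not reproduced here (a fixed margin and plain squared Euclidean distance are used instead), and the weighting `lam` is a hypothetical hyperparameter, not a value from the paper.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Standard softmax cross-entropy, averaged over the batch.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def triplet_centre_loss(features, labels, centres, margin=1.0):
    # Squared Euclidean distance from each feature to every class centre.
    # (The paper learns this metric; a fixed metric is assumed here.)
    d = ((features[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    n = len(labels)
    d_pos = d[np.arange(n), labels]        # distance to own-class centre
    d_neg = d.copy()
    d_neg[np.arange(n), labels] = np.inf   # mask out the own-class column
    d_neg = d_neg.min(axis=1)              # distance to nearest rival centre
    # Hinge: own centre must be closer than any rival centre by >= margin.
    return np.maximum(0.0, d_pos + margin - d_neg).mean()

def total_loss(logits, features, labels, centres, margin=1.0, lam=0.1):
    # Cross-entropy plus the margin-based compactness/separation term.
    return cross_entropy(logits, labels) + lam * triplet_centre_loss(
        features, labels, centres, margin)
```

When features coincide with their class centres and the centres are well separated, the triplet-centre term vanishes and only the cross-entropy remains; a feature sitting near a rival centre incurs a penalty proportional to the margin violation.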
