We address anomaly detection (AD) with a deep network pretrained via self-supervised learning on an auxiliary geometric transformation (GT) classification task. Our key contribution is a novel loss function that augments the standard cross-entropy with an additional term that plays a significant role in the later stages of self-supervised learning. The enabling innovation is a triplet centre loss with an adaptive margin and a learnable metric, which continually drives the GT classes towards greater compactness and inter-class separation. The pretrained network is finetuned for the downstream task using non-anomalous data only, and a GT model of the data is constructed. Anomalies are detected by fusing the outputs of several decision functions defined on the learnt GT class model. In contrast to the majority of existing methods, our approach strictly adheres to the pure AD design philosophy, relying exclusively on non-anomalous data for the design. Extensive experiments on four publicly available AD datasets demonstrate the effectiveness of the proposed contributions, yielding significant gains over the state-of-the-art (1.8% on F-MNIST, 1.0% on CIFAR-10, 1.2% on CIFAR-100, and 1.7% on CatVsDog).

Code: https://github.com/12sf12/Deep-Anomaly-Detection
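To make the loss concrete, the following is a minimal numpy sketch of a triplet centre loss of the kind described: each embedding is pulled towards its own GT-class centre and pushed away from the nearest other centre by a margin, under a learnable linear metric. All names, shapes, and the exact hinge form are illustrative assumptions; the paper's adaptive-margin schedule and metric parameterisation are not specified in the abstract and are not reproduced here.

```python
import numpy as np

def triplet_centre_loss(feats, labels, centres, margin, W=None):
    """Illustrative triplet centre loss (assumed form, not the paper's exact one).

    feats:   (N, D) embeddings of the transformed samples
    labels:  (N,)   GT-class indices
    centres: (K, D) learnable class centres
    margin:  scalar margin (the paper makes this adaptive during training)
    W:       optional (D, D) matrix defining a learnable metric
             d(x, c) = ||W (x - c)||^2; W = identity recovers squared Euclidean
    """
    if W is None:
        W = np.eye(feats.shape[1])
    n = feats.shape[0]
    diffs = feats[:, None, :] - centres[None, :, :]   # (N, K, D) sample-centre offsets
    proj = diffs @ W.T                                # apply the learnable metric
    dists = np.sum(proj ** 2, axis=-1)                # (N, K) squared distances
    pos = dists[np.arange(n), labels]                 # distance to own class centre
    neg = dists.copy()
    neg[np.arange(n), labels] = np.inf                # mask out the positive centre
    nearest_neg = neg.min(axis=1)                     # closest competing centre
    # Hinge: own-centre distance must beat the nearest rival by the margin.
    return np.maximum(0.0, pos + margin - nearest_neg).mean()
```

In training, this term would be added to the standard cross-entropy (e.g. `loss = ce + lam * triplet_centre_loss(...)` with a weighting hyperparameter `lam`), so that it continues to tighten the GT clusters after the classification loss has largely saturated.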