Abstract

Industrial image anomaly detection under the one-class classification setting has significant practical value. However, most existing models struggle to extract separable feature representations during feature embedding and to construct compact descriptions of normal features during one-class classification. One direct consequence is that most models perform poorly in detecting logical anomalies that violate contextual relationships. Focusing on more effective and comprehensive anomaly detection, we propose a network based on self-supervised learning and self-attentive graph convolution (SLSG). SLSG uses a generative pre-training network to assist the encoder in learning the embedding of normal patterns and reasoning about positional relationships. Subsequently, we introduce pseudo-prior knowledge of anomalies into SLSG using simulated abnormal samples. By comparing against the simulated anomalies, SLSG can better summarize the normal patterns and narrow the hypersphere used for one-class classification. In addition, by constructing a more general graph structure, SLSG comprehensively models the dense and sparse relationships among the elements in an image, which further strengthens the detection of logical anomalies. Extensive experiments on benchmark datasets show that SLSG achieves superior anomaly detection performance, demonstrating the effectiveness of our method.
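
The abstract does not specify SLSG's exact layers, so the sketch below is only an illustrative assumption of how self-attention can be combined with graph convolution over patch features to capture both dense and sparse relationships; all layer names, dimensions, and the top-k adjacency construction are hypothetical and not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttentiveGraphConv(nn.Module):
    """Illustrative layer: builds a dense adjacency from self-attention and a
    sparse adjacency from each patch's top-k neighbours, then propagates
    features along both graphs (an assumption, not SLSG's actual design)."""

    def __init__(self, dim: int, k: int = 8):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.update = nn.Linear(dim, dim)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) patch embeddings from an encoder
        q, k = self.q(x), self.k_proj(x)
        # Dense relationships: softmax attention over all patch pairs
        dense_adj = torch.softmax(q @ k.transpose(-2, -1) / x.size(-1) ** 0.5, dim=-1)
        # Sparse relationships: keep only each patch's top-k attention weights
        topk = torch.topk(dense_adj, self.k, dim=-1)
        sparse_adj = torch.zeros_like(dense_adj).scatter_(-1, topk.indices, topk.values)
        sparse_adj = sparse_adj / sparse_adj.sum(dim=-1, keepdim=True).clamp_min(1e-6)
        # Graph convolution: aggregate neighbour features along both adjacencies
        out = self.update(dense_adj @ x + sparse_adj @ x)
        return F.relu(out + x)  # residual connection


if __name__ == "__main__":
    feats = torch.randn(2, 196, 256)      # e.g. 14x14 patches, 256-dim features
    layer = SelfAttentiveGraphConv(dim=256)
    print(layer(feats).shape)             # torch.Size([2, 196, 256])
```

In this reading, the dense adjacency lets every image element attend to every other (useful for logical anomalies that span distant regions), while the sparse top-k adjacency restricts propagation to the most related elements; the specific mixing of the two is an assumption for illustration.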
