Abstract

Machine anomaly detection is the task of detecting abnormal machine conditions from collected monitoring data. Recently, increasing attention has focused on autoencoder (AE) based unsupervised anomaly detection (UAD) for mechanical equipment. Typically, UAD is based on the assumption that all of the training data are normal. In real scenarios, however, the raw monitoring data may be polluted by abnormal data due to machine failures, environmental noise, sensor failures, etc. Without effective regularization, AE-based methods would overfit the polluted data. To address this issue, we design an effective loss, called core loss, which can perform AE-based UAD in a model-agnostic and end-to-end manner under data pollution. First, we experimentally observe that the AE shows the self-clean characteristic (SCC) under data pollution, i.e., the network prioritizes learning the normal data within the polluted training data. Next, we focus on the core samples, which make up a clean and representative subset of the unlabeled data. Based on the SCC, we propose the core prior: reconstruction errors lying in the high-density middle region correspond to the core samples. Finally, to enhance the SCC, the core loss exploits the core prior to effectively mine and learn the core samples. The experimental results show that the core loss can effectively improve the performance of different network structures on both clean and polluted data. The corresponding Python code is available at https://github.com/albertszg/Coreloss.
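The abstract does not spell out how the core prior is operationalized; a minimal sketch of one way to turn it into a training-time weighting is given below. It assumes a density estimate over per-sample reconstruction errors and a hypothetical `keep_ratio` parameter, and is not the authors' implementation (which is available at the GitHub link above).

```python
# Hypothetical sketch: mine "core" samples from reconstruction errors by density,
# then average the loss over that subset only. Not the authors' core loss.
import numpy as np
from scipy.stats import gaussian_kde


def core_sample_mask(recon_errors, keep_ratio=0.7):
    """Select a core subset: errors lying in the high-density middle region.

    recon_errors : 1-D array of per-sample reconstruction errors.
    keep_ratio   : assumed fraction of samples treated as core (hypothetical knob).
    """
    kde = gaussian_kde(recon_errors)       # density estimate over the errors
    density = kde(recon_errors)            # density at each sample's own error
    k = int(keep_ratio * len(recon_errors))
    core_idx = np.argsort(density)[-k:]    # keep the k highest-density samples
    mask = np.zeros(len(recon_errors), dtype=bool)
    mask[core_idx] = True
    return mask


def core_weighted_loss(recon_errors, mask):
    """Average the reconstruction loss over the mined core samples only."""
    return recon_errors[mask].mean()


# Usage with dummy errors: mostly normal samples plus a few polluted outliers.
errors = np.concatenate([np.random.rand(95) * 0.1, np.random.rand(5) * 2.0])
mask = core_sample_mask(errors)
print(core_weighted_loss(errors, mask))
```

In this sketch, polluted samples tend to produce large, low-density reconstruction errors, so they are excluded from the loss, which mimics the idea of reinforcing the self-clean characteristic by learning only from core samples.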
