Abstract

Fault detection, or anomaly detection, relies heavily on learning from datasets where only normal samples are available, which has led to numerous one-class classification (OCC) methods. However, learning discriminative deep representations that generalize well from cross-domain positive samples remains challenging. This work therefore proposes an end-to-end framework, deep transfer one-class classification (DTOCC), for unsupervised fault detection, which combines adversarial generative OCC and distribution alignment from the perspective of manifold learning. Specifically, pseudo-negative samples are generated outside the positive manifold, enabling the model to learn to discriminate between normal and anomalous samples. Further, cross-domain positive samples are aligned in log-Euclidean manifold space to enhance representation learning. We then provide specific implementations for fault detection and validate the framework's superiority through case studies on multi-class and run-to-failure datasets, simulating both offline and online scenarios.
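The log-Euclidean alignment mentioned above is typically computed between symmetric positive-definite (SPD) statistics, such as feature covariance matrices, of the two domains. As a minimal illustrative sketch (not the paper's implementation; the covariance inputs and function names here are assumptions), the log-Euclidean distance maps each SPD matrix through the matrix logarithm and compares the results under the Frobenius norm:

```python
import numpy as np

def logm_spd(A):
    """Matrix logarithm of a symmetric positive-definite matrix
    via eigendecomposition: logm(A) = V diag(log w) V^T."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance between two SPD matrices:
    the Frobenius norm of the difference of their matrix logs."""
    return np.linalg.norm(logm_spd(A) - logm_spd(B), ord="fro")

# Hypothetical example: covariances of source- and target-domain
# features standing in for the cross-domain statistics to be aligned.
rng = np.random.default_rng(0)

def random_spd(d):
    M = rng.standard_normal((d, d))
    return M @ M.T + d * np.eye(d)  # guaranteed SPD

cov_src = random_spd(4)
cov_tgt = random_spd(4)
print(log_euclidean_distance(cov_src, cov_tgt))
```

Minimizing such a distance between source and target covariance statistics is one common way to express distribution alignment on the SPD manifold; a loss of this form could be added to a network's training objective.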
