Abstract
Anomaly detection is of considerable importance in areas ranging from industrial production and financial transactions to medical diagnosis. Because anomaly detection datasets are extremely imbalanced, semi-supervised anomaly detection methods based on deep generative models, which use only normal samples during training, have proven effective in many fields. However, real-world training sets are inevitably polluted by noisy and abnormal samples, which greatly challenges the deployment of semi-supervised anomaly detection methods and degrades their practical performance. In our opinion, the most fundamental reason is that the latent representations of normal and abnormal samples learned by such methods are entangled. To tackle this problem, we propose to regularize the latent representations learned by a deep generative model through mutual information maximization, and we provide theoretical justification that the representations learned by our method lie far from those of abnormal samples. In addition, we propose a technique named adaptive filtering that discards noisy samples, and we empirically show that it stabilizes and enhances the model. Extensive evaluations on tabular, image, and real-world datasets demonstrate the effectiveness and robustness of our method.
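The abstract does not detail how the mutual-information regularizer is computed. As a generic illustration only (not the authors' implementation), one common way to maximize mutual information between paired representations, e.g. an input's encoding and its latent code, is the InfoNCE lower bound, which treats matched pairs as positives and all other pairs in the batch as negatives. The function below is a minimal NumPy sketch under that assumption; the function name and temperature parameter are hypothetical.

```python
import numpy as np

def infonce_lower_bound(z, c, temperature=0.1):
    """InfoNCE lower bound on the mutual information I(z; c).

    z, c: (N, d) arrays of paired representations; row i of z is the
    positive match for row i of c. Returns a scalar estimate in nats,
    bounded above by log(N).
    """
    # Cosine-similarity logits between every z_i and every c_j.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    c = c / np.linalg.norm(c, axis=1, keepdims=True)
    logits = (z @ c.T) / temperature

    # Row-wise log-softmax; the positives sit on the diagonal.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Average log-probability of the positive pair, shifted by log(N).
    return float(np.mean(np.diag(log_probs)) + np.log(len(z)))
```

In a training loop one would subtract this bound from the reconstruction loss (i.e., maximize it), encouraging latent codes to remain informative about their normal inputs; a higher bound for truly paired data than for shuffled data indicates the estimator is behaving as expected.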