Abstract
Unsupervised anomaly detection is a challenging problem, where the aim is to detect irregular data instances. Generative models can learn the data distribution and have therefore been proposed for anomaly detection. In this direction, the variational autoencoder (VAE) is popular, as it enforces an explicit probabilistic interpretation of the latent space. We note that there are other generative autoencoders (AEs), such as the denoising AE (DAE) and the contractive AE (CAE), which also model the data-generation process without enforcing an explicit probabilistic interpretation of the latent space. While the benefit of an explicitly probabilistic latent space is intuitively clear for generative tasks, it is unclear whether it is crucial for anomaly detection. Consequently, in this paper we investigate the extent to which different latent space attributes of AEs impact their performance on anomaly detection tasks. We take the conventional, deterministic AE, which we refer to as the plain AE (PAE), as the baseline for performance comparison. Our results on five different datasets reveal that an explicit probabilistic latent space is not necessary for good performance. The best results on most of the datasets are obtained using the CAE, which enjoys stable latent representations.
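The common scoring scheme underlying all the AE variants compared here is reconstruction error: an AE fit on (mostly) normal data reconstructs normal instances well and irregular ones poorly. Below is a minimal, hedged sketch of this principle using a linear autoencoder (a PCA-style encoder/decoder built from an SVD), not any of the paper's actual models; the synthetic data, the subspace dimension `k`, and the 99th-percentile threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: "normal" points lie on a 2-D subspace of a 10-D space;
# "anomalies" are drawn from the full 10-D space (assumption, not paper data).
normal = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))
anomalies = rng.normal(size=(5, 10)) * 3.0
X = np.vstack([normal, anomalies])

# Linear "plain AE": encoder/decoder given by the top-k right singular
# vectors of the centered normal data (a stand-in for a trained PAE).
k = 2
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
W = vt[:k].T  # 10 x k projection matrix

def reconstruction_error(x):
    z = (x - mean) @ W          # encode: project onto the subspace
    x_hat = z @ W.T + mean      # decode: map back to input space
    return np.linalg.norm(x - x_hat, axis=1)

# Score every instance; flag those above a quantile of the normal errors.
scores = reconstruction_error(X)
threshold = np.quantile(reconstruction_error(normal), 0.99)
flagged = np.where(scores > threshold)[0]
```

A nonlinear AE (PAE, DAE, CAE, or VAE) would replace the SVD projection with learned encoder/decoder networks, but the anomaly score remains the same reconstruction-error quantity.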