Abstract

We applied convolutional versions of a "standard" autoencoder (CAE), a variational autoencoder (VAE), and an adversarial autoencoder (AAE) to two publicly available datasets and compared their anomaly detection performance. We used the MNIST dataset [14] as a simple anomaly detection scenario, and the CIFAR10 dataset [13] to examine the autoencoders on a more complex task. The anomaly detection performance of the three autoencoder types is compared both qualitatively and quantitatively, and the time needed to train the models is measured as an indicator of their complexity. The CAE, the simplest model with the simplest training procedure, produces results that are nearly as accurate as, and in some cases better than, those achieved by the VAE and AAE. We show that all three autoencoder types yield convincing anomaly detection results for the simpler MNIST scenario. However, none of them captures a good representation of the relevant features of the more complex CIFAR10 dataset, leading to only moderately good anomaly detection performance.
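To make the reconstruction-based anomaly detection setup concrete, the following is a minimal sketch, not the authors' implementation: a small convolutional autoencoder (CAE) in PyTorch, trained on "normal" images only, where test images with high reconstruction error are flagged as anomalies. The layer sizes are illustrative assumptions for 28x28 single-channel MNIST-style inputs.

```python
# Hypothetical CAE sketch for reconstruction-error anomaly detection (not the paper's code).
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 1x28x28 input -> compact 32x7x7 feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 16x14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x7x7
            nn.ReLU(),
        )
        # Decoder: mirror of the encoder, back to 1x28x28
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1, output_padding=1),  # 16x14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2, padding=1, output_padding=1),   # 1x28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, images):
    """Per-image mean squared reconstruction error; higher score = more anomalous."""
    model.eval()
    with torch.no_grad():
        recon = model(images)
        return ((images - recon) ** 2).mean(dim=(1, 2, 3))

# Usage sketch: train on normal-class images with MSE loss, then threshold the scores.
model = CAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()
batch = torch.rand(8, 1, 28, 28)        # placeholder for a batch of normal images
loss = criterion(model(batch), batch)   # reconstruction loss
optimizer.zero_grad(); loss.backward(); optimizer.step()
scores = anomaly_scores(model, batch)   # flag images whose score exceeds a chosen threshold
```

The VAE and AAE variants referenced in the abstract would replace the plain bottleneck with a regularized latent space, but the anomaly score can still be derived from the reconstruction error in the same way.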
