Abstract

Pattern recognition systems inevitably misclassify anomalous inputs, which can be dangerous for uninformed users; anomalies must therefore be filtered out before each classification. The main challenge in anomaly filter design is the enormous number of possible anomaly samples compared with the number of samples in the training set. Tailoring the filter to the given classifier is only the first step in reducing this space. The paper tests the hypothesis that a filter trained to reject "near" anomalies will also reject "far" anomalies, so that the anomaly detector reduces to a classifier distinguishing "far real" from "near anomaly" samples. A Generative Adversarial Network (GAN) fake generator, which transforms normally distributed random seeds into fakes resembling the training samples, was used as the generator of "far real" samples. The paper confirms the assumption that seeds unused during fake training generate anomalies. The two kinds of seeds are distinguished by their Chebyshev norms: the fakes have seeds inside a hypersphere of a given radius, whereas the near anomalies have seeds in a thin shell near the sphere's cover. Experiments with various anomaly test sets show that, under these assumptions, GAN-based anomaly detectors form a reliable anti-anomaly shield. The proposed anomaly detector is tailored to the given classifier; its limitation is that it requires access to the database on which the classifier was trained.
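The seed-selection rule described above (training-like fakes from seeds inside a Chebyshev hypersphere, near anomalies from seeds close to its cover) can be sketched as follows. This is a minimal illustration, not the paper's code: the latent dimension D, radius R, and shell half-width EPS are assumed values, and the trained GAN generator that would consume these seeds is omitted.

```python
import numpy as np

D = 128    # latent (seed) dimension -- illustrative assumption
R = 3.0    # Chebyshev radius enclosing the training seeds -- illustrative
EPS = 0.2  # half-width of the "near cover" shell -- illustrative


def chebyshev_norm(z: np.ndarray) -> np.ndarray:
    """L-infinity norm of each seed: the maximum absolute coordinate."""
    return np.max(np.abs(z), axis=-1)


def far_real_seeds(n: int, rng: np.random.Generator) -> np.ndarray:
    """Normally distributed seeds kept only if they lie well inside the
    Chebyshev hypersphere; the GAN would map them to training-like fakes."""
    batches, total = [], 0
    while total < n:
        z = rng.standard_normal((n, D))
        keep = z[chebyshev_norm(z) <= R - EPS]
        batches.append(keep)
        total += len(keep)
    return np.concatenate(batches)[:n]


def near_anomaly_seeds(n: int, rng: np.random.Generator) -> np.ndarray:
    """Seeds rescaled onto a thin shell around the sphere's cover
    (Chebyshev norm in [R - EPS, R + EPS]); the GAN would map them
    to "near anomaly" samples."""
    z = rng.standard_normal((n, D))
    target = rng.uniform(R - EPS, R + EPS, size=(n, 1))
    return z / chebyshev_norm(z)[:, None] * target


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z_real = far_real_seeds(512, rng)      # generator(z_real) -> "far real" set
    z_anom = near_anomaly_seeds(512, rng)  # generator(z_anom) -> "near anomaly" set
    print(chebyshev_norm(z_real).max(), chebyshev_norm(z_anom).min())
```

Following the abstract's scheme, a binary detector would then be trained to separate the generator's outputs for the two seed sets, and inputs it flags as "near anomaly" would be withheld from the classifier.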
