Abstract

Deep learning is a type of machine learning that learns a deep hierarchy of concepts. Deep learning classifiers link the most basic concepts at the input layer to the most abstract concepts at the output layer, known as classes or labels. However, once trained over a finite set of classes, some deep learning models have no way of indicating that a given input belongs to none of those classes and therefore cannot be linked to any of them. Correctly invalidating predictions for such unrelated inputs is a challenging problem that has been tackled in many ways in the literature. Novelty detection gives deep learning the ability to output “do not know” for novel/unseen classes. Still, no attention has been given to the security aspects of novelty detection. In this paper, we consider the case study of abstraction-based novelty detection and show its weakness against adversarial samples. We show the feasibility of crafting adversarial samples that bypass the novelty detection monitor and fool the deep learning classifier at the same time. In other words, novelty detection itself ends up as an attack surface. Moreover, we call for further research from a defender’s point of view. We investigate auto-encoders as a plausible defense mechanism and assess their performance.
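
To make the auto-encoder defense mentioned above concrete, the sketch below shows one common way reconstruction error can be used as a novelty score; this is a hedged illustration under assumed dimensions and thresholds, not the paper's actual implementation.

```python
# Minimal sketch (assumption, not the authors' code): flag inputs as novel when
# an auto-encoder trained on in-distribution data reconstructs them poorly.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=32):  # dimensions are illustrative
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, hidden_dim))
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def is_novel(model, x, threshold):
    """Return True for samples whose per-sample reconstruction MSE exceeds a
    threshold calibrated on held-out in-distribution data."""
    with torch.no_grad():
        err = torch.mean((model(x) - x) ** 2, dim=1)
    return err > threshold
```

In practice the auto-encoder is trained only on the known classes, the threshold is chosen from the distribution of reconstruction errors on held-out in-distribution data (e.g. a high percentile), and the classifier's prediction is rejected whenever `is_novel` fires.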
