Abstract

In recent years, deep learning (DL) robustness, i.e., the ability of a network to maintain its decision when inputs are subject to perturbations, has become a question of paramount importance, in particular in settings where misclassification can have dramatic consequences. To address this question, authors have proposed various approaches, such as adding regularizers or training with noisy examples. In this paper we introduce a regularizer based on the Laplacian of similarity graphs obtained from the representations of the training data at each layer of the DL architecture. This regularizer penalizes large changes, across consecutive layers of the architecture, in the distances between examples of different classes, and as such enforces smooth variations of the class boundaries. We provide a theoretical justification for this regularizer and demonstrate its effectiveness at improving robustness on classical supervised vision datasets under various types of perturbations. We also show that it can be combined with existing methods to further increase overall robustness.
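To make the idea concrete, below is a minimal PyTorch sketch of a regularizer in this spirit. The k-nearest-neighbor cosine-similarity graph, the use of one-hot class indicators, and the absolute-difference penalty across consecutive layers are our illustrative assumptions, not necessarily the paper's exact construction.

    import torch
    import torch.nn.functional as F

    def graph_laplacian(features, k=10):
        # Flatten each example's representation and normalize to unit norm,
        # so the Gram matrix below contains cosine similarities.
        z = F.normalize(features.flatten(1), dim=1)
        sim = z @ z.t()
        # Sparsify: keep the k+1 strongest similarities per row (the +1
        # absorbs the self-similarity on the diagonal), then symmetrize.
        idx = torch.topk(sim, k + 1, dim=1).indices
        adj = torch.zeros_like(sim).scatter_(1, idx, sim.gather(1, idx))
        adj = 0.5 * (adj + adj.t())
        adj.fill_diagonal_(0)
        # Combinatorial Laplacian L = D - A.
        return torch.diag(adj.sum(dim=1)) - adj

    def label_smoothness(L, labels, num_classes):
        # Dirichlet energy tr(S^T L S) of the one-hot indicators S: it is
        # large when graph edges connect examples of different classes.
        S = F.one_hot(labels, num_classes).float()
        return torch.einsum('ic,ij,jc->', S, L, S)

    def laplacian_regularizer(layer_features, labels, num_classes):
        # Penalize the change in smoothness between consecutive layers,
        # so class boundaries vary gradually through the architecture.
        e = [label_smoothness(graph_laplacian(f), labels, num_classes)
             for f in layer_features]
        return sum(torch.abs(b - a) for a, b in zip(e, e[1:]))

In a training loop, the total loss could then take the form loss = F.cross_entropy(logits, labels) + lam * laplacian_regularizer(intermediate_feats, labels, num_classes), where intermediate_feats (a hypothetical name) collects the activations of consecutive layers for the current mini-batch and lam trades task accuracy against boundary smoothness.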

Highlights

  • Deep learning (DL) networks provide state-of-the-art performance for many machine-learning tasks [1, 2]

  • We show in subsection B) that the proposed regularizer increases robustness to random perturbations and weak adversarial attacks

  • In this paper we have introduced a definition of robustness alongside an associated regularizer

Summary

Introduction

Deep learning (DL) networks provide state-of-the-art performance for many machine-learning tasks [1, 2]. Their ability to achieve good generalization is often explained by the fact that they use very few priors about data [3]. Robustness refers to the ability of a classifier to infer correctly even when the inputs (or the parameters of the classifier) are subject to perturbations. These perturbations can stem from general factors, such as noise, quantization of inputs or parameters, and adversarial attacks, as well as application-specific ones, such as a different camera lens, brightness exposure, or weather conditions in an imaging task.
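One common way to make this notion precise (a hedged paraphrase consistent with the description above, not necessarily the paper's exact definition) is to say that a classifier \hat{y} is robust at an input x up to radius r if no admissible perturbation changes its decision:

    \forall \varepsilon \;\text{with}\; \|\varepsilon\| \le r : \quad \hat{y}(x + \varepsilon) = \hat{y}(x).

Under this reading, the regularizer introduced in this paper targets a network-internal counterpart of this property: representations of differently labeled examples should not be drawn sharply closer together from one layer to the next.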
