Abstract

Deep neural networks are typically trained on large amounts of data to achieve state-of-the-art accuracy in applications ranging from object recognition to natural language processing. It has also been claimed that these networks memorize the training data, which can then be extracted from network parameters such as weights and gradient information. The adversarial vulnerability of deep networks is usually evaluated on the unseen test set of a database. If a network memorizes its training data, then small perturbations of the training images should not drastically change its performance. Based on this assumption, we first evaluate the robustness of deep neural networks to small perturbations added to the training images used to learn the network's parameters. We observe that the network remains vulnerable to these small perturbations even on images it has already seen. We further propose a novel data augmentation technique that increases the robustness of deep neural networks to such perturbations.
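The evaluation described above can be sketched as follows: perturb each training image within a small L-infinity ball and measure how often the model's prediction survives. This is a minimal illustrative sketch, not the paper's actual protocol; the linear "classifier", the random stand-in images, and the uniform-noise perturbation are all assumptions made here for self-containment (the paper's perturbations and architectures are not specified in the abstract).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" linear classifier standing in for a deep network
# (assumption: the abstract does not specify the architecture).
n_classes, dim = 3, 64
W = rng.normal(size=(n_classes, dim))

def predict(x):
    """Return predicted class labels for a batch of flattened images."""
    return (x @ W.T).argmax(axis=1)

# "Training" images the model has already seen (random stand-ins here).
train_x = rng.normal(size=(200, dim))
train_y = predict(train_x)  # treat the model's clean outputs as reference labels

def robustness_to_noise(x, y, eps):
    """Fraction of training images whose prediction is unchanged under a
    random perturbation bounded by eps in the L-infinity norm."""
    delta = rng.uniform(-eps, eps, size=x.shape)
    return float((predict(x + delta) == y).mean())

# Robustness typically degrades as the perturbation budget eps grows,
# even though every image was "seen" during training.
for eps in (0.01, 0.1, 1.0):
    print(f"eps={eps}: fraction of stable predictions = "
          f"{robustness_to_noise(train_x, train_y, eps):.2f}")
```

A full evaluation in the paper's setting would replace `predict` with a trained deep network and the random noise with the specific perturbations studied, but the accounting (fraction of training images whose prediction survives) stays the same.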
