Abstract

In recent years, deep learning has driven significant advances in high-level vision tasks (e.g., image classification, object detection, and semantic segmentation). State-of-the-art models that have shown impressive results on recognition tasks typically share a common structure: stage-wise encoding of the image followed by a generic classifier. However, these architectures have been shown to be vulnerable to adversarial perturbations, which may undermine the security of systems built on deep neural networks. In this work, we first present a rigorous evaluation of adversarial attacks on recent deep learning models for two different high-level tasks (image classification and semantic segmentation). We then propose a model- and dataset-independent approach for generating adversarial perturbations, and we study the transferability of these perturbations across datasets and tasks. Moreover, we analyze the effect of different network architectures, which will aid future efforts to understand and defend against adversarial perturbations. We perform comprehensive experiments on several standard image classification and segmentation datasets to demonstrate the effectiveness of our proposed approach.
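To make the notion of an adversarial perturbation concrete, the sketch below shows a gradient-sign attack in the style of FGSM on a toy logistic classifier, where the loss gradient is available in closed form. This is an illustrative assumption, not the paper's proposed method: the model, weights, and epsilon value are all hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon=0.25):
    """Perturb x in the direction of the sign of the loss gradient.

    For a logistic model sigmoid(w.x + b) with binary cross-entropy
    loss, the gradient of the loss w.r.t. x is (sigmoid(w.x + b) - y_true) * w.
    """
    grad_x = (sigmoid(np.dot(w, x) + b) - y_true) * w
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)        # hypothetical model weights
b = 0.0
x = rng.normal(size=8)        # hypothetical clean input
y = 1.0                       # true label

x_adv = fgsm_perturb(x, w, b, y)
clean_score = sigmoid(np.dot(w, x) + b)
adv_score = sigmoid(np.dot(w, x_adv) + b)
# The perturbation pushes the model's confidence away from the true
# label y = 1, so adv_score ends up lower than clean_score.
```

A universal, model- and dataset-independent perturbation of the kind the abstract describes differs in that a single perturbation is crafted once and then applied to many inputs and models, rather than being computed per input as above.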
