Abstract

Visual adversarial examples are images and videos purposefully perturbed to mislead machine learning models. This chapter presents an overview of methods that craft adversarial perturbations to generate visual adversarial examples for image classification, object detection, motion estimation, and video recognition tasks. We define the key properties of an adversarial attack and the types of perturbations that an attack generates. We then analyze the main design choices for methods that craft adversarial attacks for images and videos, and discuss the level of knowledge of the target model that each attack assumes. Finally, we review defense mechanisms that increase the robustness of machine learning models to adversarial attacks or detect manipulated input data.
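To make the notion of an adversarial perturbation concrete, the sketch below shows a minimal gradient-based attack in the style of the fast gradient sign method, one canonical way such perturbations are crafted; the abstract does not name a specific method, so the model, tensor names, and step size here are illustrative assumptions rather than the chapter's formulation.

```python
# Minimal FGSM-style sketch, assuming a differentiable PyTorch classifier
# `model`, an input `image` in [0, 1], and its ground-truth `label`.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon,
    # then clip back to the valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```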
