Abstract

This paper studies adversarial attacks and defenses against deep learning models trained on infrared data to classify the presence of humans and to detect their bounding boxes. Unlike the standard RGB case, this is an open research problem with multiple consequences for safety and secure artificial intelligence applications. The paper makes two major contributions. First, we study the effectiveness of the Projected Gradient Descent (PGD) adversarial attack against Convolutional Neural Networks (CNNs) trained exclusively on infrared data, and the effectiveness of adversarial training as a possible defense against the attack. Second, we study the response of an object detection model trained on infrared images under adversarial attacks. In particular, we propose and empirically evaluate two attacks: a classical attack from the object detection literature, and a new hybrid attack which exploits the CNN base architecture shared by the classifier and the object detector. We show for the first time that adversarial attacks weaken the performance of classification and detection models trained on infrared images only. We also show that adversarial training optimized for the infinity norm increases the robustness of different classification models trained on infrared data.
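For readers unfamiliar with the attack, the following is a minimal sketch of an L-infinity PGD attack of the kind the paper evaluates, written in PyTorch. It is illustrative only: the function name, the hyperparameters (eps, alpha, steps), and the assumption of inputs normalized to [0, 1] are choices made here for the example, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Illustrative L-infinity PGD: perturb x within an eps-ball to maximize the loss.
    Hyperparameters are example values, not the settings used in the paper."""
    # random start inside the eps-ball around the clean input
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # gradient-sign ascent step, then project back onto the eps-ball and valid range
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

Under the same assumptions, the adversarial training defense mentioned in the abstract amounts to replacing (or mixing) clean training batches with the output of such an attack at each optimization step, so the model learns on worst-case perturbed infrared images within the eps-ball.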
