Defending Against Physical Adversarial Patch Attacks on Infrared Human Detection

Abstract

Infrared detection is an emerging technique for safety-critical tasks owing to its remarkable anti-interference capability. However, recent studies have revealed that it is vulnerable to physically realizable adversarial patches, posing risks in its real-world applications. To address this problem, we are the first to investigate defense strategies against adversarial patch attacks on infrared detection, especially human detection. We propose a straightforward defense strategy, patch-based occlusion-aware detection (POD), which efficiently augments training samples with random patches and subsequently detects them. POD not only robustly detects people but also identifies adversarial patch locations. Surprisingly, while being extremely computationally efficient, POD easily generalizes to state-of-the-art adversarial patch attacks that are unseen during training. Furthermore, POD improves detection precision even in a clean (i.e., no-attack) situation due to the data augmentation effect. Our evaluation demonstrates that POD is robust to adversarial patches of various shapes and sizes. These results show our baseline approach to be a viable defense mechanism for real-world infrared human detection systems, paving the way for future research directions.
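
As a rough illustration of the augmentation step summarized above, the sketch below pastes a random-noise patch onto a training image and appends a matching bounding-box label, so that a detector learns both to tolerate occlusion and to localize patches. All names here (e.g. `PATCH_CLASS_ID`, `augment_with_random_patch`) are hypothetical stand-ins for illustration, not the paper's implementation.

```python
# Minimal sketch of patch-based occlusion augmentation (hypothetical
# names; assumes (C,H,W) image tensors and (N,4) xyxy box tensors).
import torch

PATCH_CLASS_ID = 1  # assumed extra class label for "adversarial patch"

def augment_with_random_patch(image: torch.Tensor, boxes: torch.Tensor,
                              labels: torch.Tensor):
    """Paste a random patch onto `image` and append its box and label."""
    _, h, w = image.shape
    ph, pw = h // 5, w // 5                       # illustrative patch size
    y0 = torch.randint(0, h - ph, (1,)).item()
    x0 = torch.randint(0, w - pw, (1,)).item()
    image = image.clone()
    image[:, y0:y0 + ph, x0:x0 + pw] = torch.rand(image.shape[0], ph, pw)
    patch_box = torch.tensor([[x0, y0, x0 + pw, y0 + ph]], dtype=boxes.dtype)
    boxes = torch.cat([boxes, patch_box], dim=0)
    labels = torch.cat([labels, torch.tensor([PATCH_CLASS_ID])])
    return image, boxes, labels
```

Training on such samples is the mechanism by which a detector can report patch locations at inference time in addition to people.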

Similar Papers
  • Research Article
  • Cited by 48
  • 10.3390/rs14215298
Adversarial Patch Attack on Multi-Scale Object Detection for UAV Remote Sensing Images
  • Oct 23, 2022
  • Remote Sensing
  • Yichuang Zhang + 6 more

Although deep learning has received extensive attention and achieved excellent performance in various scenarios, it suffers from adversarial examples to some extent. In particular, physical attacks pose a greater threat than digital attacks. However, existing research has paid less attention to physical attacks on object detection in UAV remote sensing images (RSIs). In this work, we carefully analyze universal adversarial patch attacks on multi-scale objects in the field of remote sensing. An adversarial attack on RSIs faces two challenges. On the one hand, remote sensing images contain many more objects than natural images, so it is difficult for an adversarial patch to exhibit an adversarial effect on all objects when attacking an RSI detector. On the other hand, the wide altitude range of the photography platform causes object sizes to vary a great deal, which complicates the generation of a universal adversarial perturbation for multi-scale objects. To this end, we propose an adversarial attack method for object detection on remote sensing data. One of the key ideas of the proposed method is a novel optimization of the adversarial patch: we aim to attack as many objects as possible by formulating a joint optimization problem. Furthermore, we introduce a scale factor to generate a universal adversarial patch that adapts to multi-scale objects, ensuring that the patch remains valid for multi-scale objects in the real world. Extensive experiments demonstrate the superiority of our method against state-of-the-art methods on YOLO-v3 and YOLO-v5. In addition, we validate the effectiveness of our method in real-world applications.
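
A hedged sketch of the two ideas highlighted in this abstract, joint optimization over all target objects and a per-object scale factor, might look as follows. `score_fn` is an assumed stand-in for a differentiable detector returning per-detection confidence scores, and all hyperparameters are illustrative.

```python
# Sketch: one patch optimized jointly against many objects, rescaled
# relative to each target box so it stays effective across scales.
import torch
import torch.nn.functional as F

def optimize_universal_patch(images, boxes_per_image, score_fn,
                             patch_size=64, steps=200, lr=0.03, rel_scale=0.3):
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for img, boxes in zip(images, boxes_per_image):
            adv = img.clone()
            for (x0, y0, x1, y1) in boxes.tolist():
                # Scale factor: size the patch relative to each object.
                pw = max(2, int(rel_scale * (x1 - x0)))
                ph = max(2, int(rel_scale * (y1 - y0)))
                p = F.interpolate(patch.unsqueeze(0), size=(ph, pw),
                                  mode="bilinear", align_corners=False)[0]
                sy = min(max(int((y0 + y1) / 2) - ph // 2, 0), img.shape[1] - ph)
                sx = min(max(int((x0 + x1) / 2) - pw // 2, 0), img.shape[2] - pw)
                adv[:, sy:sy + ph, sx:sx + pw] = p
            # Joint objective: suppress detector confidence on all objects.
            loss = loss + score_fn(adv).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)
    return patch.detach()
```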

  • Research Article
  • Cited by 3
  • 10.1016/j.neunet.2025.107271
Advertising or adversarial? AdvSign: Artistic advertising sign camouflage for target physical attacking to object detector.
  • Jun 1, 2025
  • Neural networks : the official journal of the International Neural Network Society
  • Guangyu Gao + 3 more

  • Conference Article
  • Cited by 7
  • 10.1109/icodt255437.2022.9787422
Physical Adversarial Attack Scheme on Object Detectors using 3D Adversarial Object
  • May 24, 2022
  • Abeer Toheed + 3 more

Adversarial attacks are frequently used these days to exploit machine learning models, including deep neural networks (DNNs), either during the training or the testing stage. DNNs under such attacks make false predictions. Digital adversarial attacks are not applicable in the physical world, and an adversarial attack on object detection is more difficult than one on image classification. This paper presents a physical adversarial attack on object detection using 3D adversarial objects. The proposed methodology overcomes the constraint of 2D adversarial patches, which work only from certain viewpoints. We map an adversarial texture onto a mesh to create a 3D adversarial object; these objects are of various shapes and sizes. Unlike adversarial patches, these adversarial objects are movable from one place to another. Experimental results show that our 3D adversarial objects are free from the viewpoint constraints of 2D patches and perform a successful attack on object detection. We used the ShapeNet dataset for different vehicle models, created the 3D objects using Blender 2.93 [1], and incorporated different HDR images to create the virtual physical environment. Moreover, we targeted Faster R-CNN and YOLO models pre-trained on the COCO dataset as our target DNNs. Experimental results demonstrate that our proposed model successfully fooled these object detectors.

  • Research Article
  • Cited by 28
  • 10.1609/aaai.v34i01.5459
Beyond Digital Domain: Fooling Deep Learning Based Recognition System in Physical World
  • Apr 3, 2020
  • Proceedings of the AAAI Conference on Artificial Intelligence
  • Kaichen Yang + 4 more

Adversarial examples that can fool deep neural network (DNN) models in computer vision present a growing threat. Current methods of launching adversarial attacks concentrate on attacking image classifiers by adding noise to digital inputs; the problems of attacking object detection models and of mounting adversarial attacks in the physical world are rarely touched. Some prior works launch physical adversarial attacks against object detection models but are limited in certain aspects. In this paper, we propose a novel physical adversarial attack targeting object detection models. Instead of simply printing images, we manufacture real metal objects that achieve the adversarial effect. In both indoor and outdoor experiments we show that our physical adversarial objects can fool widely applied object detection models, including SSD, YOLO, and Faster R-CNN, in various environments. We also test our attack on a variety of commercial object detection platforms and demonstrate that it remains valid on these platforms. Considering the potential defense mechanisms our adversarial objects may encounter, we conduct a series of experiments to evaluate the effect of existing defense methods on our physical attack.

  • Conference Article
  • 10.1109/aipr57179.2022.10092199
Adversarial Barrel! An Evaluation of 3D Physical Adversarial Attacks
  • Oct 11, 2022
  • Mohammad Zarei + 3 more

Computer vision models based on Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. It has also been demonstrated that physical adversarial attacks can affect computer vision models through printed media or physical 3D objects. However, the efficacy of physical adversarial attacks is highly variable under real-world conditions. In this research, we leverage a synthetic validation environment to evaluate 2D and 3D physical adversarial attacks on state-of-the-art object detection models (Faster R-CNN, RetinaNet, YOLOv3, YOLOv4). Using the Unreal Engine, we create synthetic environments to evaluate the limitations of physical adversarial attacks. We evaluate 2D adversarial patches under varying lighting conditions and poses, optimize the same adversarial attacks for 3D shapes including a pyramid, a cube, and a barrel (cylinder), and evaluate the robustness of the 3D physical attacks against the 2D attack baseline. We test our attacks against object detection models trained on MSCOCO, VIRAT, VISDRONE, and synthetic datasets. By advancing physical adversarial attacks and validation methodology, we improve our ability to red-team computer vision models, with the goal of defending and assuring AI systems used in fields such as transportation and security.

  • Research Article
  • Cited by 3
  • 10.3390/s24196461
SSIM-Based Autoencoder Modeling to Defeat Adversarial Patch Attacks
  • Oct 6, 2024
  • Sensors
  • Seungyeol Lee + 3 more

Object detection systems are used in various fields such as autonomous vehicles and facial recognition. In particular, object detection using deep learning networks enables real-time processing on low-performance edge devices while maintaining high detection rates. However, edge devices that operate far from administrators are vulnerable to various physical attacks by malicious adversaries. In this paper, we implement traffic sign detection using You Only Look Once (YOLO) as well as Faster R-CNN, which can be adopted by edge devices of autonomous vehicles. Then, assuming the role of a malicious attacker, we execute adversarial patch attacks with Adv-Patch and Dpatch, attempting to cause misdetection of stop signs, and confirm that the attacks succeed with high probability. To defeat these attacks, we propose an image reconstruction method using an autoencoder and the Structural Similarity Index Measure (SSIM). We confirm that the proposed method can sufficiently defend against the attacks, attaining a mean Average Precision (mAP) of 91.46% even when two adversarial attacks are launched.
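
The reconstruction idea above can be sketched with a small autoencoder trained under an SSIM objective. The tiny architecture and the uniform-window SSIM approximation below are illustrative assumptions, not the paper's actual model.

```python
# Sketch: autoencoder trained to maximize SSIM with the clean input, so
# that patch-perturbed inputs are reconstructed toward clean statistics.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2, win=7):
    # Local means/variances via average pooling (a uniform-window
    # approximation of the usual Gaussian-window SSIM).
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
        ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()

class ConvAutoencoder(nn.Module):          # illustrative stand-in
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
                                 nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(4, 3, 64, 64)               # stand-in for training images
loss = 1.0 - ssim(model(x), x)             # maximize structural similarity
opt.zero_grad(); loss.backward(); opt.step()
```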

  • Research Article
  • Cited by 180
  • 10.1109/lcomm.2019.2901469
Physical Adversarial Attacks Against End-to-End Autoencoder Communication Systems
  • Jan 1, 2019
  • IEEE Communications Letters
  • Meysam Sadeghi + 1 more

We show that end-to-end learning of communication systems through deep neural network autoencoders can be extremely vulnerable to physical adversarial attacks. Specifically, we elaborate on how an attacker can craft effective physical black-box adversarial attacks. Due to the openness (broadcast nature) of the wireless channel, an adversary transmitter can increase the block-error-rate of a communication system by orders of magnitude by transmitting a well-designed perturbation signal over the channel. We reveal that adversarial attacks are more destructive than jamming attacks. We also show that classical coding schemes are more robust than the autoencoders against both adversarial and jamming attacks.
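
The "more destructive than jamming" claim can be illustrated with a toy NumPy experiment: under the same power budget, a perturbation aligned against the transmitted signal degrades the link far more than random jamming. BPSK over AWGN is used here as a simple stand-in for the learned autoencoder system, and all values are illustrative.

```python
# Toy comparison of random jamming vs. a power-matched adversarial
# perturbation (BPSK over AWGN as a stand-in for the learned system).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
bits = rng.integers(0, 2, n)
x = 2.0 * bits - 1.0                     # BPSK symbols, unit power
noise = 0.5 * rng.standard_normal(n)     # channel noise
p_budget = 0.8                           # perturbation power budget

jam = np.sqrt(p_budget) * rng.standard_normal(n)   # random jamming
adv = -np.sqrt(p_budget) * np.sign(x)              # worst-case direction

for name, pert in [("jamming", jam), ("adversarial", adv)]:
    y = x + pert + noise
    ber = np.mean((y > 0).astype(int) != bits)
    print(f"{name:12s} BER = {ber:.4f}")
```

With these numbers, the adversarial perturbation drives the bit error rate markedly higher than jamming of identical power, mirroring the paper's qualitative finding.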

  • Conference Article
  • Cited by 2
  • 10.1109/tensymp50017.2020.9230666
Physical Adversarial Attacks Against Deep Learning Based Channel Decoding Systems
  • Jan 1, 2020
  • 2020 IEEE Region 10 Symposium (TENSYMP)
  • Surabhi Ashok Babu + 1 more

Deep Learning (DL), in spite of its huge success in many fields, is extremely vulnerable to adversarial attacks. We demonstrate how an attacker can apply physical white-box and black-box adversarial attacks to DL-based channel decoding systems. We show that these attacks degrade system performance and find that they are more effective than conventional jamming attacks. Additionally, we show that classical decoding schemes are more robust than deep learning channel decoding systems in the presence of both adversarial and jamming attacks.

  • Research Article
  • Cited by 7
  • 10.1016/j.aeue.2022.154478
Adversarial attacks and active defense on deep learning based identification of GaN power amplifiers under physical perturbation
  • Dec 2, 2022
  • AEU - International Journal of Electronics and Communications
  • Yuqing Xu + 4 more

  • Research Article
  • Cited by 1
  • 10.1109/tpami.2025.3596462
Real-World Adversarial Defense Against Patch Attacks Based on Diffusion Model.
  • Dec 1, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Xingxing Wei + 6 more

Adversarial patches present significant challenges to the robustness of deep learning models, making the development of effective defenses critical for real-world applications. This paper introduces DIFFender, a novel DIFfusion-based DeFender framework that leverages a text-guided diffusion model to counter adversarial patch attacks. At the core of our approach is the discovery of the Adversarial Anomaly Perception (AAP) phenomenon, which enables the diffusion model to accurately detect and locate adversarial patches by analyzing distributional anomalies. DIFFender seamlessly integrates the tasks of patch localization and restoration within a unified diffusion model framework, enhancing defense efficacy through their close interaction. Additionally, DIFFender employs an efficient few-shot prompt-tuning algorithm, facilitating the adaptation of the pre-trained diffusion model to defense tasks without extensive retraining. Our comprehensive evaluation, covering image classification and face recognition tasks as well as real-world scenarios, demonstrates DIFFender's robust performance against adversarial attacks. The framework's versatility and generalizability across various settings, classifiers, and attack methodologies mark a significant advancement in adversarial patch defense strategies. Beyond the popular visible domain, we have identified another advantage of DIFFender: it easily extends to the infrared domain. Consequently, DIFFender can flexibly defend against both infrared and visible adversarial patch attacks within a single universal defense framework.
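
The restoration half of such a pipeline can be sketched with an off-the-shelf text-guided inpainting model. Note the assumptions: the patch mask is taken as given (locating it via AAP is the paper's own contribution), the Hugging Face `diffusers` inpainting pipeline stands in for DIFFender's actual model, and the file names and prompt are hypothetical.

```python
# Sketch: repaint a known patch region with text-guided diffusion
# inpainting (generic stand-in, not DIFFender's model).
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting")

image = Image.open("attacked_frame.png").convert("RGB").resize((512, 512))
mask = Image.open("patch_mask.png").convert("L").resize((512, 512))
# Convention: white pixels in `mask` mark the region to be repainted.

restored = pipe(prompt="a person, photorealistic",
                image=image, mask_image=mask).images[0]
restored.save("restored_frame.png")
```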

  • Conference Article
  • Cited by 2
  • 10.1109/aipr57179.2022.10092200
Detecting Physical Adversarial Patch Attacks with Object Detectors
  • Oct 11, 2022
  • Melanie Jutras + 4 more

Machine learning models are vulnerable to adversarial attacks, which can cause integrity violations in real-world systems with machine learning components. Alarmingly, these attacks can also manifest in the physical world, where an adversary can disrupt systems without gaining digital access. They are becoming more concerning as safety-critical infrastructure such as healthcare and transportation increasingly relies on machine learning. This work is motivated by the need to safeguard vision-based systems against physical adversarial pattern attacks, an important concern for autonomous vehicles. We propose the use of a separate detection module that can identify inputs containing physical adversarial patterns. This approach allows independent development of the defensive mechanism, which can be updated without affecting the performance of the protected model; model developers can focus on performance and leave security to a separate team. It is a practical approach that can provide security in cases where a model is acquired from a third party and cannot be retrained. Our experiments demonstrate that we can detect unknown adversarial patterns with high accuracy using standard object detectors trained on datasets containing adversarial patches. A single detector is capable of detecting a variety of adversarial patterns trained from models with different datasets and tasks. Additionally, we introduce a new class of visually distinct adversarial patch attack we call GAN patches; our experimentation shows that, once observed, the detection module can be updated to identify additional classes of patch attacks. Finally, we experiment with detectors trained on innocuous patches and examine how they generalize to detecting a variety of known patch attacks.
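
The separate-module idea can be sketched as a simple gate in front of the protected model. Here `patch_detector` is assumed to be a torchvision-style object detector already fine-tuned on images containing adversarial patches, as the paper describes; the threshold is illustrative.

```python
# Sketch: run the patch detector first; only trust the protected model
# when no adversarial pattern is found.
import torch

@torch.no_grad()
def guarded_inference(image, patch_detector, protected_model, thresh=0.5):
    det = patch_detector([image])[0]          # torchvision-style output
    hit = det["scores"] > thresh
    if hit.any():
        # Flag the input (or route it to fallback handling) rather than
        # forwarding a possibly compromised frame.
        return {"flagged": True, "patch_boxes": det["boxes"][hit]}
    return {"flagged": False, "prediction": protected_model([image])[0]}
```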

  • Book Chapter
  • Cited by 1
  • 10.1007/978-981-97-4581-4_21
An Improved Technique for Generating Effective Noises of Adversarial Camera Stickers
  • Jan 1, 2024
  • Satoshi Okada + 1 more

Cyber-physical systems (CPS) represent the integration of the physical world with digital technologies and are expected to change our everyday lives significantly. With the rapid development of CPS, the importance of artificial intelligence (AI) has been increasingly recognized. Concurrently, adversarial attacks that cause incorrect predictions in AI models have emerged as a new risk. They are no longer limited to digital data and now extend to the physical environment; thus, they pose serious practical threats to CPS. In this paper, we focus on the “adversarial camera stickers attack,” a type of physical adversarial attack that directly affixes adversarial noise to a camera lens. Since the adversarial noise perturbs every image captured by the camera, it must be universal. To realize more effective adversarial camera stickers, we propose a new method for generating more universal adversarial noise than previous research. We first reveal that the existing method for generating the noise of adversarial camera stickers does not always produce universal perturbations. Then, we address this drawback by improving the optimization problem. Furthermore, we implement our proposed method, achieving an attack success rate 2.5 times higher than existing methods. Our experiments prove the capability of our proposed method to generate more universal adversarial noise, highlighting its potential effectiveness in enhancing security measures against adversarial attacks in CPS.
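
The universality requirement discussed above amounts to optimizing one noise pattern against the expected loss over many images rather than a single one. The sketch below shows that objective in its simplest additive form; the paper's actual sticker model is more elaborate (it simulates translucent dots on the lens), so `model`, `mask`, and all hyperparameters are assumptions.

```python
# Sketch: one noise pattern optimized over a whole batch, so the same
# perturbation fools the model on whatever the camera captures.
import torch
import torch.nn.functional as F

def universal_sticker_noise(model, images, labels, mask,
                            steps=100, lr=0.01, eps=0.3):
    # images: (N,C,H,W) batch; mask: (C,H,W) region the sticker covers.
    noise = torch.zeros_like(images[0], requires_grad=True)
    opt = torch.optim.Adam([noise], lr=lr)
    for _ in range(steps):
        adv = (images + mask * noise).clamp(0, 1)    # same noise everywhere
        loss = -F.cross_entropy(model(adv), labels)  # push predictions wrong
        opt.zero_grad(); loss.backward(); opt.step()
        noise.data.clamp_(-eps, eps)
    return noise.detach()
```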

  • Research Article
  • Cited by 9
  • 10.1016/j.neucom.2024.127431
Adversarial patch-based false positive creation attacks against aerial imagery object detectors
  • Feb 20, 2024
  • Neurocomputing
  • Guijian Tang + 4 more

  • Research Article
  • Cited by 6
  • 10.34190/iccws.19.1.2044
Adversarial Camera Patch: An Effective and Robust Physical-World Attack on Object Detectors
  • Mar 21, 2024
  • International Conference on Cyber Warfare and Security
  • Kalibinuer Tiliwalidi + 3 more

Physical adversarial attacks present a novel and growing challenge in cybersecurity, especially for systems that feed physical inputs to Deep Neural Networks (DNNs), such as those found in Internet of Things (IoT) devices. These systems are vulnerable to physical adversarial attacks in which real-world objects or environments are manipulated to mislead DNNs, threatening the operational integrity and security of IoT devices. Camera-based attacks are among the most practical adversarial attacks: they are easy to implement, more robust than other attack methods, and pose a significant threat to IoT security. This paper proposes Adversarial Camera Patch (ADCP), a novel approach that employs a single camera patch to launch robust physical adversarial attacks against object detectors. ADCP optimizes the physical parameters of the camera patch using Particle Swarm Optimization (PSO) to identify the most adversarial configuration. The optimized camera patch is then attached to the lens to physically generate stealthy and robust adversarial samples. The effectiveness of the proposed approach is validated through ablation experiments in a digital environment, with results demonstrating its effectiveness even under worst-case scenarios (minimal width, maximum transparency). Notably, ADCP exhibits higher robustness in both digital and physical domains compared to the baseline. Given its simplicity, robustness, and stealthiness, we advocate attention to the ADCP framework as a means of achieving streamlined, robust, and stealthy physical attacks. Such adversarial attacks pose new challenges and requirements for cybersecurity.
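
Since PSO needs only black-box fitness evaluations, the parameter search described above can be sketched generically. The `fitness` callable is an assumed stand-in that renders a camera patch with the given physical parameters (e.g. position, width, transparency), runs the detector, and returns the confidence to minimize; bounds and hyperparameters are illustrative.

```python
# Sketch: generic particle swarm optimization over patch parameters.
import numpy as np

def pso_minimize(fitness, bounds, n_particles=20, iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]      # bounds: (dim, 2) array
    pos = rng.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    g = pbest_val.argmin()
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(iters):
        r1, r2 = rng.random((2, *pos.shape))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        g = pbest_val.argmin()
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val
```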

  • Conference Article
  • Cited by 5
  • 10.23919/iccas52745.2021.9650004
Camouflaged Adversarial Attack on Object Detector
  • Oct 12, 2021
  • Jeonghun Kim + 4 more

The existence of physical-world adversarial examples such as adversarial patches proves the vulnerability of real-world deep learning systems. It is therefore essential to develop efficient adversarial attack algorithms to identify potential risks and build robust systems. Patch-based physical adversarial attacks have shown their effectiveness in attacking neural-network-based object detectors. However, the generated patches are quite perceptible to humans, violating a fundamental assumption of adversarial examples. In this work, we present task-specific loss functions that can generate imperceptible adversarial patches based on camouflaged patterns. First, we propose a constrained optimization method with two camouflage assessment metrics to quantify camouflage performance. Then, we show that regularization with those metrics helps generate adversarial patches based on camouflage patterns. Furthermore, we validate our methods with various experiments and show that we can generate natural-style camouflaged adversarial patches with comparable attack performance.
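
A hedged sketch of such a regularized objective appears below. The two camouflage terms, distance to a reference camouflage color palette and total variation for smooth blob-like texture, are illustrative stand-ins for the paper's camouflage assessment metrics, which the abstract does not name.

```python
# Sketch: attack loss plus camouflage regularizers (illustrative terms).
import torch

def camouflage_patch_loss(patch, det_score, palette,
                          lam_color=1.0, lam_tv=0.1):
    # patch: (3,H,W) in [0,1]; palette: (K,3) reference camouflage colors;
    # det_score: detector confidence on the patched image (to minimize).
    px = patch.permute(1, 2, 0).reshape(-1, 3)       # pixels as (H*W, 3)
    color_reg = torch.cdist(px, palette).min(dim=1).values.mean()
    tv = (patch[:, 1:, :] - patch[:, :-1, :]).abs().mean() + \
         (patch[:, :, 1:] - patch[:, :, :-1]).abs().mean()
    return det_score + lam_color * color_reg + lam_tv * tv
```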
