Abstract
Physical adversarial attacks face significant challenges in achieving transferability across different object detection models, especially under real-world conditions. This is primarily due to variations in model architectures, training data, and detection strategies, which can make adversarial examples highly model-specific. This study introduces a multi-model adversarial training approach to improve the transferability of adversarial textures across diverse detection models, including one-stage, two-stage, and transformer-based architectures. Using the Truck Adversarial Camouflage Optimization (TACO) framework and a novel combination of YOLOv8n, YOLOv5m, and YOLOv3 models for optimization, our approach achieves an AP@0.5 detection score of 0.0972—over 50% lower than that of textures trained on a single model. This result highlights the importance of multi-model training in producing adversarial textures that remain effective across a range of object detectors.
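The core idea—optimizing one texture against several detectors jointly so it cannot overfit to any single model—can be sketched as follows. This is a minimal illustration, not the TACO implementation: the toy differentiable "detectors" stand in for pretrained YOLOv8n, YOLOv5m, and YOLOv3 networks, and the 3-dimensional parameter vector stands in for a full camouflage texture.

```python
import torch

# Hypothetical stand-ins for the detector ensemble. In the actual framework
# each would be a pretrained YOLO model returning a differentiable detection
# confidence for the target vehicle; here each is a tiny sigmoid scorer.
def make_toy_detector(seed):
    g = torch.Generator().manual_seed(seed)
    w = torch.randn(3, generator=g)
    return lambda tex: torch.sigmoid((tex * w).sum())

detectors = [make_toy_detector(s) for s in (0, 1, 2)]

# Adversarial "texture" parameters (toy 3-dim vector in this sketch).
texture = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([texture], lr=0.1)

for step in range(200):
    opt.zero_grad()
    # Multi-model objective: minimize the mean detection confidence over
    # all models simultaneously, encouraging transferable perturbations.
    loss = torch.stack([d(texture) for d in detectors]).mean()
    loss.backward()
    opt.step()

# Averaged confidence after optimization (starts at 0.5 for a zero texture).
final = torch.stack([d(texture.detach()) for d in detectors]).mean()
```

Averaging the per-model losses is the simplest aggregation choice; weighting models differently, or taking the max over models, are common alternatives when one detector dominates the gradient signal.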