Abstract
Recently, infrared object detection (IOD) has been extensively studied due to the rapid growth of deep neural networks (DNNs). An adversarial attack using an imperceptible perturbation can dramatically degrade the performance of DNNs. Most existing adversarial attacks focus on visible image recognition (VIR), and few target IOD. Moreover, existing attacks are difficult to apply to state-of-the-art detectors (e.g., EfficientDet) due to low compatibility. To solve this problem, we propose a novel upcycling adversarial attack for IOD that extends highly compatible adversarial attacks from the VIR task. We also propose a novel evaluation metric, attack efficiency (AE), to compare the effectiveness of different adversarial attacks. Since the AE value increases as the perturbation size shrinks and the performance drop grows, we can concurrently compare, across attacks, both the similarity between adversarial and clean images and the resulting performance degradation. We validate our approaches through comprehensive experiments on two challenging datasets (FLIR and MSOD) for the infrared domain.
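The abstract does not give the exact formula for attack efficiency (AE); as a minimal sketch, assuming AE is a ratio of the detector's performance drop to the perturbation size (consistent with "increases with small perturbation size and significant performance drop"), it could look like:

```python
def attack_efficiency(clean_map, adv_map, perturbation_norm):
    """Hypothetical AE-style score (the paper's exact definition may differ):
    performance drop per unit of perturbation, so a larger drop achieved
    with a smaller perturbation yields a higher score."""
    performance_drop = clean_map - adv_map  # e.g., drop in mAP on IOD
    return performance_drop / perturbation_norm  # e.g., L2 norm of the perturbation

# The same accuracy drop with half the perturbation scores twice as high.
print(attack_efficiency(0.80, 0.40, 2.0))  # weaker attack per unit perturbation
print(attack_efficiency(0.80, 0.40, 1.0))  # stronger attack per unit perturbation
```

Here `clean_map`, `adv_map`, and `perturbation_norm` are illustrative names, not quantities defined in the abstract; such a ratio merely captures the stated ordering, letting different attacks be compared on a single axis.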