Abstract

Recently, adversarial patches have been used successfully to fool object detectors, either by hiding a specific object or by suppressing almost all relevant detections in an image. Although there are various ways to harden detectors against, or identify, such attacks in the visual spectrum, only a small fraction of this work actually evaluates these mechanisms on thermal infrared input data. Thermal infrared object detectors and classifiers cannot be fooled with pixel-optimized adversarial patches, but they are still prone to Gaussian function patches. This paper (I) investigates two methods for hardening real-time infrared object detectors against adversarial patches. One of these methods is our novel (II) APMD, an extension of an existing adversarial robustness mechanism that relies on (unsupervised) adversarial training to remove adversarial patches for deep learning object detectors in the infrared spectrum. We therefore (III) generate adversarial patches that fool object detectors in the infrared spectrum in three different ways and evaluate them on real-world data recorded with the experimental platform MODISSA. Our results show that the hardened system is fast enough for real-time use and successfully detects and inhibits adversarial attacks.
