Abstract

Adversarial attacks reveal a potential weakness of deep models: they can be fooled by imperceptible perturbations added to images. Recent deep multi-object trackers combine detection and association, so attacking either the detector or the association component is an effective means of deception. Existing attacks focus on increasing the frequency of ID switches, which greatly damages tracking stability but is not enough to render the tracker completely ineffective. To fully explore the potential of adversarial attacks, we propose the Blind-Blur Attack (BBA), a novel attack method that exploits spatio-temporal motion information to fool multi-object trackers. Specifically, a simple but efficient perturbation generator is trained with a blind-blur loss that simultaneously makes real targets invisible to the tracker and causes background regions to be perceived as moving targets. We take TraDeS as our main research tracker and verify our attack on other strong algorithms (i.e., CenterTrack, FairMOT, and ByteTrack) using the MOT-Challenge benchmark datasets (i.e., MOT16, MOT17, and MOT20). BBA reduces the MOTA of TraDeS and ByteTrack from 69.1 and 80.3 to −238.1 and −357.0, respectively, indicating that it is an efficient attack with a high degree of transferability.
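The abstract does not specify the implementation, but the dual objective it describes can be illustrated in code. The PyTorch sketch below shows one plausible reading under stated assumptions: a small convolutional generator produces a bounded perturbation, and a two-term loss pushes detection confidence down on real targets ("blind") and up on background ("blur"). All names here (PerturbationGenerator, blind_blur_loss, tracker_head, target_mask, eps) are hypothetical for illustration and are not the paper's published code.

```python
# Hypothetical sketch of a BBA-style training step, assuming the tracker
# exposes a detection heatmap in [0, 1]. Not the authors' implementation.
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Small conv net mapping a frame to a bounded (L-inf) perturbation."""
    def __init__(self, channels: int = 3, eps: float = 8.0 / 255.0):
        super().__init__()
        self.eps = eps  # bound keeps the perturbation imperceptible
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # Tanh output in [-1, 1] scaled by eps gives a bounded perturbation.
        return self.eps * self.net(frame)

def blind_blur_loss(heatmap: torch.Tensor, target_mask: torch.Tensor) -> torch.Tensor:
    """Two terms: 'blind' suppresses detection scores on real targets;
    'blur' raises scores on background so it is mistaken for moving targets.
    `target_mask` is 1 on ground-truth target regions, 0 on background."""
    blind = (heatmap * target_mask).mean()                  # push target scores down
    blur = ((1.0 - heatmap) * (1.0 - target_mask)).mean()   # pull background scores up
    return blind + blur

def train_step(gen, tracker_head, frame, target_mask, optimizer):
    """One optimizer step over the generator; the tracker's weights are
    assumed frozen and only used to compute gradients for the attack."""
    optimizer.zero_grad()
    adv_frame = (frame + gen(frame)).clamp(0.0, 1.0)  # keep a valid image
    heatmap = tracker_head(adv_frame)
    loss = blind_blur_loss(heatmap, target_mask)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time, under these assumptions, a single forward pass of the trained generator per frame would suffice to attack the video stream, which is consistent with the abstract's claim that the generator is simple but efficient.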
