Abstract

Deeply understanding machine learning remains challenging because the inner workings of deep models are still opaque. Adversarial attacks exploit the vulnerability of deep models and can be used both to probe deep neural networks and to improve model robustness. An adversarial attack on a tracking model adds imperceptible noise to an input frame and thereby increases the cumulative error in the predicted target displacement. Most existing adversarial attacks on visual tracking are white-box attacks: although they successfully deceive the baseline tracker they were trained against, their attacking effect transfers poorly to unseen (blind) trackers. Here, we present a new approach, the diminishing-feature attack, which injects a subtle perturbation into the input frame. The malicious noise is generated to disturb the feature heatmap, distracting the classification score and destroying the bounding-box prediction. The adversarial noise generator is trained solely on SiamRPN++ ResNet features of template frames, without search frames or any other tracker parameters, so our method requires fewer computing resources than other visual-tracking attackers while still disabling the tracker. To validate the proposed model, we benchmark the adversarial input frames against the tracker on four tracking datasets: OTB100, VOT2018, LaSOT and UAV123. Our attack achieves a performance drop comparable to that of existing adversarial attackers in visual tracking, while introducing a smaller adversarial perturbation. Furthermore, the attack transfers well to other state-of-the-art trackers such as SiamRPN, DaSiam and DiMP.
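To illustrate the idea of training a perturbation generator only on template-frame features, the following is a minimal sketch, assuming a PyTorch setup; `backbone` stands in for the SiamRPN++ ResNet feature extractor and `PerturbationGenerator` and `feature_diminishing_loss` are hypothetical names introduced here for illustration, not the paper's released code.

```python
# Minimal sketch of a feature-diminishing perturbation generator (assumptions:
# `backbone` is any nn.Module mapping a template image to a feature map,
# e.g. a frozen SiamRPN++ ResNet; names below are illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Small conv net that maps a template frame to a bounded additive perturbation."""
    def __init__(self, channels=3, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        # tanh keeps the noise within an L_inf ball of radius eps (imperceptible)
        return self.eps * torch.tanh(self.net(x))

def feature_diminishing_loss(backbone, generator, template):
    """Push the backbone features of the perturbed template away from the clean
    features, so the downstream heatmap and box prediction degrade."""
    with torch.no_grad():
        clean_feat = backbone(template)                      # clean template features
    adv_feat = backbone(template + generator(template))      # perturbed template features
    # maximizing the feature distortion = minimizing the negative distance
    return -F.mse_loss(adv_feat, clean_feat)
```

In this sketch the generator sees only template frames and a frozen backbone, consistent with the abstract's claim that no search frames or other tracker parameters are needed during training.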
