Abstract

Machine learning encompasses a variety of tasks, for example, classification, object detection, segmentation, and visual tracking. Adversarial attacks exploit the generalization weakness of deep neural networks, learning to anticipate a model's behavior and cause it to malfunction. Although adversarial attack models can effectively suppress the performance of classification models, few attack models exist for visual tracking, because tracking models differ from conventional classification models. In this paper, we propose a feature-based attack model that generates a mild perturbation in the template region using two loss functions: one minimizes the magnitude of the malicious noise, and the other interferes with the heatmap feature so that the tracker is gradually forced into an erroneous position. We train the proposed attack model against SiamRPN++. Our model requires only the template image to attack the tracker, which reduces the training cost. In the experiments, we benchmark our model on the OTB100, VOT2018, LaSOT, and UAV123 datasets. Our model successfully deceives the tracker and outperforms existing adversarial attack models in visual tracking. Moreover, our noise generator is the fastest among the compared models.
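The abstract does not give the exact loss formulation, so the following is only a minimal sketch of how the two stated objectives, suppressing the noise magnitude and interfering with the heatmap, might be combined when training the perturbation generator. All names here (attack_loss, delta, heatmap_adv, heatmap_clean, lam) are hypothetical illustrations, not identifiers from the paper.

import torch
import torch.nn.functional as F

def attack_loss(delta, heatmap_adv, heatmap_clean, lam=1.0):
    # Hypothetical combination of the two objectives described in the abstract.
    # Objective 1: keep the template perturbation small so the noise stays mild.
    noise_loss = torch.norm(delta, p=2)
    # Objective 2: drive the tracker's heatmap away from its clean response,
    # gradually pulling the predicted position off the true target.
    interference_loss = -F.mse_loss(heatmap_adv, heatmap_clean)
    return noise_loss + lam * interference_loss

Under this reading, the generator would be optimized to minimize such a loss using template images only, consistent with the abstract's claim that the template image alone suffices for the attack.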
