Abstract

Unmanned Aerial Vehicle (UAV) videos have received active research attention in the remote sensing field, taking full advantage of the bird’s eye view offered by UAVs. At the same time, visual tracking approaches based on discriminative correlation filters (DCF) have recently gained increasing popularity and success. Despite this success, the robustness and accuracy of existing DCF-based trackers are difficult to improve in the challenging tracking scenarios of aerial videos, owing to their excessive reliance on the response map and a fixed linear model update strategy. To resolve these issues, we propose a robust DCF-based tracking framework for UAV-based remote sensing built on an effective pretrained rectification network. Specifically, the target-specific rectification network is trained offline to discriminatively classify the target and background. During the online tracking stage, the DCF module performs fast inference to obtain potential locations of the target. After that, the deep rectification network evaluates the correlation-specific proposals offered by the DCF module and provides precise tracking results. In addition, to achieve a robust and adaptive model update strategy, we propose to finetune both the DCF module and the rectification network according to the classification confidence of the estimated result. Extensive experimental results on recent UAV benchmarks demonstrate that our method achieves better performance than other competing algorithms.
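
As a rough illustration of the pipeline summarized above, the following minimal PyTorch-style sketch shows the control flow only: a correlation-filter module proposes candidate target locations from its response map, a rectification network re-scores those proposals, and both modules are updated only when the classification confidence of the chosen estimate is high enough, instead of with a fixed linear update at every frame. All class names, method signatures, and hyperparameters here (SimpleDCF, RectificationNet, track_frame, conf_thresh) are assumptions for illustration and are not taken from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleDCF:
        # Toy stand-in for the DCF module: correlates a feature template with the
        # current frame's features and returns the top-k response peaks as
        # candidate target locations (the "correlation-specific proposals").
        def __init__(self, template, lr=0.02):
            self.template = template      # (C, h, w) target template features
            self.lr = lr                  # interpolation rate for template updates

        def propose(self, feats, k=5):
            # feats: (C, H, W) -> correlation response of shape (H-h+1, W-w+1)
            response = F.conv2d(feats[None], self.template[None]).squeeze(0).squeeze(0)
            top = torch.topk(response.flatten(), k).indices
            ys, xs = top // response.shape[1], top % response.shape[1]
            return list(zip(ys.tolist(), xs.tolist()))

        def update(self, new_template):
            # Confidence-gated (not fixed per-frame) linear template interpolation.
            self.template = (1 - self.lr) * self.template + self.lr * new_template

    class RectificationNet(nn.Module):
        # Small binary classifier scoring a flattened candidate patch as target vs. background.
        def __init__(self, feat_dim):
            super().__init__()
            self.head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, patch_feats):   # patch_feats: (N, feat_dim)
            return torch.sigmoid(self.head(patch_feats)).squeeze(-1)

    def track_frame(feats, dcf, rect_net, optimizer, conf_thresh=0.6):
        # One online step: DCF proposals -> rectification scores -> confidence-gated update.
        _, h, w = dcf.template.shape
        candidates = dcf.propose(feats)
        patches = torch.stack([feats[:, y:y + h, x:x + w].reshape(-1) for y, x in candidates])

        with torch.no_grad():             # fast scoring of the DCF proposals
            scores = rect_net(patches)
        best = int(torch.argmax(scores))
        estimate, confidence = candidates[best], float(scores[best])

        if confidence > conf_thresh:      # adaptive update only on reliable estimates
            y, x = estimate
            dcf.update(feats[:, y:y + h, x:x + w])
            labels = torch.zeros(len(candidates))
            labels[best] = 1.0            # treat the chosen proposal as the positive sample
            loss = F.binary_cross_entropy(rect_net(patches), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        return estimate, confidence

The threshold, the toy proposal logic, and the pseudo-labeling in the update step are placeholders; the sketch is meant only to convey the order of operations (proposal, rectification, confidence-gated finetuning), not the paper's actual architecture or training objective.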
