Abstract

Recent Siamese network-based visual tracking approaches have achieved high performance on numerous recent visual tracking benchmarks, where most of these trackers employ a backbone feature extractor together with a prediction head network for classification and regression. However, there has been a constant trend toward larger and more complex backbone and prediction head networks for improved performance, and the increased computational load slows down the overall tracking algorithm. To address this issue, we propose a novel target-aware feature bottleneck module for trackers, where the proposed bottleneck elicits a target-aware feature in order to obtain a compact feature representation from the backbone network for improved speed and robustness. Our lightweight target-aware bottleneck module attends to the feature representation of the target region to extract scene-specific information and generates feature-wise modulation weights that adaptively change the importance of each feature. The proposed tracker is evaluated on the large-scale visual tracking datasets GOT-10k and LaSOT, where it runs at real-time speed and achieves improved accuracy over the baseline tracker.
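
To make the mechanism concrete, the following is a minimal PyTorch sketch of a target-aware bottleneck of the kind described above: the target (template) feature is pooled and passed through a small bottleneck that produces channel-wise modulation weights, which then rescale the search-region feature. All names, layer sizes, and the squeeze-and-excite-style design are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class TargetAwareBottleneck(nn.Module):
    """Generates feature-wise (channel-wise) modulation weights from the target
    region feature and uses them to reweight the search-region feature.
    Hypothetical sketch; the layer sizes and structure are assumptions."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Lightweight bottleneck: channels -> channels/reduction -> channels
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel importance in [0, 1]
        )

    def forward(self, target_feat: torch.Tensor, search_feat: torch.Tensor) -> torch.Tensor:
        # target_feat: (B, C, Ht, Wt) template features from the target region
        # search_feat: (B, C, Hs, Ws) backbone features of the search region
        b, c = target_feat.shape[:2]
        pooled = target_feat.mean(dim=(2, 3))        # global summary of the target region
        weights = self.fc(pooled).view(b, c, 1, 1)   # feature-wise modulation weights
        return search_feat * weights                 # adaptively reweight each channel


if __name__ == "__main__":
    bottleneck = TargetAwareBottleneck(channels=256)
    z = torch.randn(1, 256, 7, 7)     # example template (target) feature map
    x = torch.randn(1, 256, 31, 31)   # example search-region feature map
    print(bottleneck(z, x).shape)     # torch.Size([1, 256, 31, 31])
```

The modulated feature can then be fed to the prediction head in place of the raw backbone output, keeping the added computation small since the weights are generated once per frame from the pooled target feature.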
