Abstract

Discriminative correlation filters (DCFs) have achieved considerable success in visual tracking. However, most of them rely solely on features extracted by the last layer of the backbone, ignoring the rich structural information carried by low-level features. In addition, they typically predict the target model by directly minimizing a tailored objective function, which introduces inductive bias and limits the expressivity of the tracker. To address these issues, this paper proposes a pyramidal feature fusion module that integrates low-resolution, semantically strong features with high-resolution, semantically weak features. An asymmetric Transformer structure is then applied to predict the weights of the target model, and a feature refinement module is employed to optimize the search features. Extensive experiments on five mainstream datasets demonstrate the superiority of our tracker, which achieves stronger feature representations and more precise target localization.
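The abstract does not specify how the pyramidal feature fusion is implemented. The following is a minimal sketch, assuming an FPN-style top-down pathway with lateral 1x1 convolutions; the module name, channel widths, and fusion order are illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn
import torch.nn.functional as F


class PyramidalFusion(nn.Module):
    """Hypothetical FPN-style fusion: lateral 1x1 convs project each backbone
    level to a common width, then deeper (semantically strong, low-resolution)
    maps are upsampled and added into shallower (high-resolution) maps."""

    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in in_channels
        )

    def forward(self, feats):
        # feats: backbone feature maps ordered shallow -> deep
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        # Top-down pathway: propagate semantics into high-resolution maps.
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest"
            )
        # 3x3 smoothing reduces aliasing introduced by upsampling.
        return [conv(x) for conv, x in zip(self.smooth, laterals)]
```

In such a design, the fused high-resolution maps would feed both the Transformer-based model prediction and the search-feature refinement; how the paper actually wires these modules together is described only in the full text.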
