Abstract

Spatio-temporal context (STC) based visual tracking algorithms have demonstrated remarkable tracking capabilities in recent years. In this paper, we propose an improved STC method that seamlessly integrates the powerful feature representations and mappings of convolutional neural networks (CNNs) based on transfer learning. First, instead of the fixed training confidence map, our tracker uses a dynamic training confidence map, obtained from a mapping neural network fed with transferred CNN features, to adapt better to practical tracking scenes. Second, we exploit hierarchical features from both the original image intensities and the transferred CNN features to construct the context prior models; to enhance the accuracy and robustness of our tracker, we simultaneously transfer fine-grained and semantic features from the deep network. Third, we adopt a training confidence index (TCI), derived from the dynamic training confidence map, to guide the updating process: it determines whether back-propagation should be performed in the mapping neural network and whether the STC model should be updated. The dynamic training confidence map further helps our tracker to alleviate the problem of location ambiguity. Overall, comprehensive experiments on the OTB-2015 and UAV123 visual tracking benchmarks show that our tracker is competitive with several state-of-the-art trackers, and in particular against the baseline STC tracker.
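To make the TCI-gated updating scheme described above more concrete, the following Python sketch shows how such a gate could sit on top of the standard Fourier-domain STC update. It is a minimal illustration under stated assumptions: the TCI definition (peak of the dynamic confidence map), the threshold, the learning rate, and all function names are hypothetical choices for exposition, not the authors' implementation.

```python
import numpy as np

def learn_spatial_context(conf_map, context_prior, eps=1e-5):
    """Solve the spatial context model h_sc from conf = h_sc (*) prior via FFT
    (the standard STC learning step)."""
    C = np.fft.fft2(conf_map)
    P = np.fft.fft2(context_prior)
    return np.real(np.fft.ifft2(C / (P + eps)))

def training_confidence_index(dynamic_conf_map):
    """Assumed TCI: peak response of the dynamic training confidence map."""
    return float(dynamic_conf_map.max())

def update_stc_model(H_stc, h_sc, rho=0.075):
    """Linear interpolation update of the spatio-temporal context model."""
    return (1.0 - rho) * H_stc + rho * h_sc

def track_step(H_stc, dynamic_conf_map, context_prior, tci_threshold=0.5):
    """One tracking step with TCI-gated updating (illustrative only)."""
    tci = training_confidence_index(dynamic_conf_map)
    h_sc = learn_spatial_context(dynamic_conf_map, context_prior)
    if tci >= tci_threshold:
        # Confident frame: update the STC model; in the full tracker the
        # mapping neural network would also be fine-tuned (back-propagated) here.
        H_stc = update_stc_model(H_stc, h_sc)
    # Otherwise keep the previous model to avoid drift from ambiguous frames.
    return H_stc, tci

# Toy usage with synthetic maps standing in for real confidence / prior maps.
if __name__ == "__main__":
    size = (64, 64)
    H = np.zeros(size)
    conf = np.exp(-np.linspace(-3, 3, size[0])[:, None] ** 2
                  - np.linspace(-3, 3, size[1])[None, :] ** 2)
    prior = np.random.rand(*size)
    H, tci = track_step(H, conf, prior)
    print("TCI:", round(tci, 3))
```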
