Abstract

Correlation filters and deep learning methods are the two main directions in visual tracking research, yet trackers from either family rarely balance accuracy and speed well. Siamese networks have brought substantial improvements in both accuracy and speed, and a growing number of researchers are focusing on this approach. In this paper, we propose a robust adaptive-learning visual tracking algorithm based on the Siamese network model. HOG, CN, and deep convolutional features are extracted from the template frame and the search-region frame, respectively; we analyze the merits of each feature and perform adaptive feature fusion to improve the validity of the feature representation. We then update the two branch models with two learning change factors to achieve a closer match when locating the target. In addition, we propose a model update strategy that employs the average peak-to-correlation energy (APCE) to determine whether to update the learning change factors, which improves the accuracy of the tracking model and reduces tracking drift in cases of tracking failure, deformation, or background blur. Extensive experiments on the benchmark datasets OTB-50 and OTB-100 demonstrate that our algorithm outperforms several state-of-the-art trackers in accuracy and robustness.
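The abstract does not spell out the APCE criterion or the update rule, so the following is only a minimal sketch of how such a reliability-gated update could look. It uses the APCE definition common in the correlation-filter literature; the function names, the threshold ratio, the learning factor, and the linear-interpolation template update are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a 2-D response map.

    APCE = |F_max - F_min|^2 / mean((F_wh - F_min)^2),
    the usual definition in the correlation-filter literature.
    """
    f_max, f_min = response.max(), response.min()
    return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)

def maybe_update(template, current, response, apce_history,
                 learn_factor=0.01, ratio=0.8):
    """Update a branch template only when the response map looks reliable.

    `learn_factor` and `ratio` are illustrative values, not the paper's.
    """
    score = apce(response)
    # Skip the update when APCE drops well below its historical average,
    # e.g. under occlusion, deformation, or background blur.
    reliable = (not apce_history) or score >= ratio * np.mean(apce_history)
    apce_history.append(score)
    if reliable:
        # Linear-interpolation update with the learning change factor.
        template = (1.0 - learn_factor) * template + learn_factor * current
    return template
```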
