Abstract

In this paper, we address long-term visual tracking in which the target undergoes challenging conditions such as occlusion, out-of-view motion, and scale changes. We employ two discriminative correlation filters (DCFs) to achieve long-term object tracking: a spatial-temporal context correlation filter is learned for translation estimation, and a scale DCF, centered on the estimated target position, is learned to estimate the scale from the most confident response. In addition, we propose an efficient model-update and re-detection activation strategy that avoids unrecoverable drift caused by noisy updates and recovers robustly from tracking failures. We evaluate our algorithm on the OTB benchmark datasets; both qualitative and quantitative results on challenging sequences demonstrate that the proposed algorithm performs favorably against several state-of-the-art DCF-based methods, including methods that follow the deep learning paradigm.
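
The abstract summarizes the pipeline only at a high level. The minimal NumPy sketch below illustrates the generic single-channel DCF machinery such trackers build on: closed-form filter learning in the Fourier domain, peak-based translation estimation, and a confidence-gated update that would hand off to re-detection on failure. The thresholds, learning rate, and toy patches are assumptions for illustration only, not the paper's actual spatial-temporal context or scale filters.

```python
import numpy as np

def gaussian_label(shape, sigma=2.0):
    """Desired response: a Gaussian peaked at the patch centre."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def learn_filter(patch, label, lam=1e-2):
    """Closed-form single-channel correlation filter in the Fourier domain."""
    P, Y = np.fft.fft2(patch), np.fft.fft2(label)
    return (Y * np.conj(P)) / (P * np.conj(P) + lam)

def apply_filter(H, patch):
    """Correlate the learned filter with a new patch; return the response map."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

# --- toy usage (illustrative values, not from the paper) ---------------------
rng = np.random.default_rng(0)
patch0 = rng.standard_normal((64, 64))                 # target appearance at frame t
label = gaussian_label(patch0.shape)
H = learn_filter(patch0, label)

patch1 = np.roll(patch0, shift=(3, -5), axis=(0, 1))   # target shifted in frame t+1
response = apply_filter(H, patch1)
peak = np.unravel_index(np.argmax(response), response.shape)
confidence = response.max()

# Confidence-gated update: blend in the new appearance only when the peak is
# strong; below a lower threshold a re-detection module would take over.
TAU_UPDATE, TAU_REDETECT, LR = 0.3, 0.1, 0.02           # assumed thresholds
if confidence >= TAU_UPDATE:
    # In practice the patch would be re-centred on the detected peak first.
    H = (1 - LR) * H + LR * learn_filter(patch1, label)
elif confidence < TAU_REDETECT:
    pass  # low confidence: search a wider region to re-detect the target

print("peak offset from centre:", peak[0] - 32, peak[1] - 32)
```

In the same spirit, scale estimation can be viewed as running a second filter over a pyramid of patch sizes centred on the estimated position and keeping the scale with the highest response.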
