Abstract

Visual object tracking is a challenging problem owing to factors such as occlusion, background clutter, target appearance changes, abrupt target motion, and illumination variations. Different tracking algorithms perform well on different challenges because of their unique strengths. In this paper, a dual-integration framework that combines the strengths of multiple models while avoiding their weaknesses is proposed. The proposed framework integrates several evaluation criteria with multiple models updated via different processes. To distinguish the target from the distractors produced by the different criteria, a motion dynamics model and forward-backward analysis are introduced to provide information complementary to appearance cues. In addition, a spatial-temporal occlusion-aware approach is proposed to detect occlusion, and the detected occlusion results are used to prevent contamination of the appearance model. Extensive experiments on multiple benchmarks demonstrate that the proposed method improves the overlap rate of a hand-crafted feature-based tracker with relative gains of 4%, 1.6%, 1.9%, 2.2% and 8.5% on OTB-2015, OTB-2013, Temple-Color, UAV123 and UAV20L, respectively. The experimental results also show that our approach outperforms a deep feature-based tracker in overlap rate by 1.4%, 1.6%, 2.3%, 2.5% and 3.5% on OTB-2015, OTB-2013, Temple-Color, UAV123 and UAV20L, respectively.
