Abstract

Visual object tracking is a complicated problem due to varied challenges such as occlusion, background clutter, target appearance changes, abrupt target motion, and illumination variations. Different tracking algorithms perform well on different challenges because of their unique strengths. In this paper, a dual-integration framework that integrates the strengths of multiple models while avoiding their weaknesses is proposed. In the proposed framework, several evaluation criteria and multiple models updated via different processes are combined. To distinguish the target from the distractors produced by the different criteria, a motion dynamics model and forward-backward analysis are introduced to provide information complementary to appearance information. In addition, a spatiotemporal occlusion-aware approach is further proposed to discriminate occlusion, and the detected occlusions are used to avoid contaminating the appearance model. Extensive experiments on multiple benchmarks demonstrate that the proposed method improves the overlap rate of a hand-crafted feature-based tracker with relative gains of 4%, 1.6%, 1.9%, 2.2% and 8.5% on OTB-2015, OTB-2013, Temple-Color, UAV123 and UAV20L, respectively. The experimental results also demonstrate that our approach outperforms a deep feature-based tracker in overlap rate by 1.4%, 1.6%, 2.3%, 2.5% and 3.5% on OTB-2015, OTB-2013, Temple-Color, UAV123 and UAV20L, respectively.
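The forward-backward analysis mentioned above is commonly realized as a consistency check: a point is tracked forward through the sequence, the result is tracked backward, and the distance between the starting point and its round-trip estimate flags unreliable tracks. The sketch below illustrates this idea only; the `track_forward`/`track_backward` callables and the toy displacement are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def forward_backward_error(points, track_forward, track_backward):
    """Forward-backward consistency check for candidate tracks.

    Each point is tracked forward, then the result is tracked
    backward; the FB error is the Euclidean distance between the
    original point and its round-trip estimate. Large errors flag
    unreliable tracks (e.g. distractors or occluded targets).
    """
    pts = np.asarray(points, dtype=float)
    fwd = track_forward(pts)    # hypothetical forward tracker
    back = track_backward(fwd)  # hypothetical backward tracker
    return np.linalg.norm(pts - back, axis=1)

# Toy trackers: a perfect round trip and a drifting backward pass.
shift = np.array([3.0, -1.0])
perfect_fwd = lambda p: p + shift
perfect_back = lambda p: p - shift
drifting_back = lambda p: p - shift + np.array([2.0, 0.0])

pts = np.array([[10.0, 10.0], [50.0, 20.0]])
print(forward_backward_error(pts, perfect_fwd, perfect_back))   # ~[0. 0.]
print(forward_backward_error(pts, perfect_fwd, drifting_back))  # ~[2. 2.]
```

In practice the forward/backward passes would come from an optical-flow or correlation-filter tracker run over real frames; tracks whose FB error exceeds a threshold can be discarded before fusing the models' outputs.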
