Abstract

Visual tracking under occlusion, appearance change, or illumination change has been a challenging task for decades. Recently, several online trackers based on the detection-by-classification framework have achieved good performance. However, problems remain in at least one of three aspects: 1) tracking the target with a single region adapts poorly to occlusion, appearance change, or illumination change; 2) the lack of sample weight estimation may cause overfitting; and 3) the motion model is inadequate to prevent the target from drifting. To tackle these problems, this paper makes the following contributions: 1) a novel part-based structure is incorporated into online AdaBoost tracking; 2) attentional sample weighting and selection are handled by introducing a weight relaxation factor, instead of treating all samples equally as traditional trackers do; and 3) a two-stage motion model, the multiple-parts constraint, is proposed and incorporated into the part-based structure to ensure stable tracking. The effectiveness and efficiency of the proposed tracker are validated on several complex video sequences, in comparison with seven popular online trackers. The experimental results show that the proposed tracker achieves higher accuracy at comparable computational cost.
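
The abstract does not state the exact form of the weight relaxation factor. As a rough, hypothetical illustration of the idea only, the sketch below shows how such a factor could temper the standard exponential sample re-weighting in a discrete online AdaBoost update, so that a few hard samples cannot dominate the classifier; the function name, signature, and the placement of the factor are assumptions, not the authors' formulation.

```python
import numpy as np

def relaxed_weight_update(weights, misclassified, alpha, relaxation=0.5):
    """Illustrative AdaBoost-style sample weight update with a relaxation factor.

    weights:       current sample weights, shape (n_samples,)
    misclassified: 1 where the current weak learner erred, 0 otherwise
    alpha:         confidence of the current weak learner
    relaxation:    hypothetical factor in [0, 1]; 1 recovers the standard
                   update, smaller values keep the weight distribution flatter
    """
    # Standard AdaBoost would multiply by exp(alpha * misclassified);
    # the relaxation factor softens that growth for hard samples.
    new_weights = weights * np.exp(relaxation * alpha * misclassified)
    return new_weights / new_weights.sum()  # renormalize to a distribution
```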
