Abstract

Object tracking based on sparse representation formulates tracking as searching for the candidate with the minimal reconstruction error in the target template subspace. The key problem lies in modeling the target robustly against appearance variations. The appearance model in most sparsity-based trackers has two main shortcomings. The first is that global structural information and local features are insufficiently combined because the appearance is modeled separately by holistic and local sparse representations. The second is that the discriminative information between the target and the background is not fully exploited because the background is rarely considered in modeling. In this study, we develop a robust visual tracking algorithm by building a discriminative sparse appearance model of the target. A discriminative dictionary is trained from local target patches and the background. The patches capture local features, while their spatial distribution encodes the global structure of the target, so the learned dictionary can fully represent the target. Incorporating the background into dictionary learning also enhances its discriminative capability. Representing the target as a sparse coding histogram over this learned dictionary, our tracker is embedded in a Bayesian state inference framework to locate the target. We also present a model update scheme in which the update rate is adjusted automatically. In conjunction with this update strategy, the proposed tracker can handle occlusion and alleviate drifting. Comparative results on challenging benchmark image sequences show that the tracking method performs favorably against several state-of-the-art algorithms.
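
The following is a minimal sketch, not the authors' implementation, of the representation step described above: a dictionary is learned from local patches drawn from both the target and the background, a candidate region's patches are encoded sparsely over that dictionary, and the codes are pooled into a sparse coding histogram that scores candidates. Helper names such as `extract_patches`, the patch size, and the sparsity penalty `alpha` are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode


def extract_patches(region, patch=8, step=4):
    """Slide a window over a grayscale region and return flattened local patches (assumed layout)."""
    h, w = region.shape
    patches = [
        region[i:i + patch, j:j + patch].ravel()
        for i in range(0, h - patch + 1, step)
        for j in range(0, w - patch + 1, step)
    ]
    return np.asarray(patches, dtype=np.float64)


def learn_dictionary(target_region, background_regions, n_atoms=64, alpha=1.0):
    """Learn a dictionary from target and background patches (discriminative by construction)."""
    samples = [extract_patches(target_region)]
    samples += [extract_patches(bg) for bg in background_regions]
    X = np.vstack(samples)
    X -= X.mean(axis=1, keepdims=True)            # remove per-patch brightness
    dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha)
    dl.fit(X)
    return dl.components_                          # shape: (n_atoms, patch_dim)


def sparse_coding_histogram(region, dictionary, alpha=0.5):
    """Encode each local patch over the dictionary and pool the codes into a histogram."""
    X = extract_patches(region)
    X -= X.mean(axis=1, keepdims=True)
    codes = sparse_encode(X, dictionary, algorithm="lasso_lars", alpha=alpha)
    hist = np.abs(codes).sum(axis=0)               # pool coefficient magnitudes per atom
    return hist / (hist.sum() + 1e-12)             # normalize to a distribution


def candidate_score(candidate_region, template_hist, dictionary):
    """Score a candidate by histogram intersection with the target template histogram."""
    cand_hist = sparse_coding_histogram(candidate_region, dictionary)
    return np.minimum(cand_hist, template_hist).sum()
```

In a Bayesian (particle-filter style) state inference framework, a score of this kind would serve as the observation likelihood for each sampled candidate, and the template histogram would be refreshed according to an adaptive update rate as described in the abstract.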
