Abstract

Discriminative dictionary learning (DDL) provides an appealing paradigm for appearance modeling in visual tracking due to its superior discrimination power. However, most existing DDL-based trackers cannot handle drastic appearance changes, especially in scenarios with background clutter and/or interference from similar objects. One reason is that they often lose the subtle visual information that is critical for distinguishing the object from distracters. In this paper, we propose a robust tracker that jointly learns a multi-class discriminative dictionary. Our DDL method concurrently exploits intra-class visual information and inter-class visual correlations to learn a shared dictionary together with class-specific dictionaries. By imposing several discrimination constraints on the objective function, the learnt dictionary is reconstructive, compressive, and discriminative, and can therefore better distinguish the object from the background. Tracking is carried out within a Bayesian inference framework in which a joint decision measure is used to construct the observation model. Evaluations on the benchmark dataset demonstrate that the proposed algorithm achieves substantially better overall performance than state-of-the-art trackers.
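To make the "shared plus class-specific dictionaries" idea concrete, the following is a minimal numpy sketch, not the authors' algorithm: it learns, for two classes (object vs. background), a shared dictionary block and a per-class block by alternating ISTA sparse coding with a gradient step on the reconstruction term. The ℓ1 penalty, step sizes, dimensions, and the classification-by-residual rule are illustrative assumptions; the paper's discrimination constraints are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes (0 = object, 1 = background), 20-dim features,
# 50 samples per class. All sizes here are illustrative assumptions.
n, k_shared, k_cls = 20, 4, 4
X = {c: rng.normal(size=(n, 50)) for c in (0, 1)}

# One shared dictionary block plus one block per class.
D_shared = rng.normal(size=(n, k_shared))
D_cls = {c: rng.normal(size=(n, k_cls)) for c in (0, 1)}

def normalize(D):
    # Unit-norm atoms (columns), the usual dictionary-learning convention.
    return D / np.linalg.norm(D, axis=0, keepdims=True)

def sparse_code(D, X, lam=0.1, iters=50):
    # ISTA: iterative soft-thresholding for the lasso coding subproblem
    #   min_A 0.5 * ||X - D A||_F^2 + lam * ||A||_1
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(iters):
        G = A - (D.T @ (D @ A - X)) / L
        A = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)
    return A

# Alternate sparse coding and dictionary updates; the shared block is
# refined by the data of both classes, the class block by its own class.
for _ in range(10):
    for c in (0, 1):
        D = normalize(np.hstack([D_shared, D_cls[c]]))
        A = sparse_code(D, X[c])
        D = D - 0.01 * (D @ A - X[c]) @ A.T  # gradient step on ||X - DA||^2
        D_shared, D_cls[c] = D[:, :k_shared], D[:, k_shared:]

def residual(x, c):
    # Score a sample by how well class c's (shared + specific) dictionary
    # reconstructs it; smaller residual -> more likely class c.
    D = normalize(np.hstack([D_shared, D_cls[c]]))
    a = sparse_code(D, x[:, None])
    return float(np.linalg.norm(x[:, None] - D @ a))
```

In a tracker along these lines, the reconstruction residual under each class dictionary would feed the joint decision measure used in the Bayesian observation model; here it is shown only as a per-sample score.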
