Abstract

Discriminative dictionary learning (DDL) provides an appealing paradigm for appearance modeling in visual tracking due to its superior discrimination power. However, most existing DDL-based trackers cannot handle drastic appearance changes, especially in scenarios with background clutter and/or interference from similar objects. One reason is that they often lose the subtle visual information that is critical for distinguishing the object from distractors. In this paper, we propose a robust tracker based on jointly learning a multi-class discriminative dictionary. Our DDL method concurrently exploits intra-class visual information and inter-class visual correlations to learn a shared dictionary and class-specific dictionaries. By imposing several discrimination constraints on the objective function, the learnt dictionary is reconstructive, compressive, and discriminative, and can thus better discriminate the object from the background. Tracking is carried out within a Bayesian inference framework, where a joint decision measure is used to construct the observation model. Evaluations on the benchmark dataset demonstrate that the proposed algorithm achieves substantially better overall performance than state-of-the-art trackers.
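To make the abstract's pipeline concrete, the sketch below illustrates one plausible form of such an observation model: a candidate patch is sparsely coded over the concatenation of a shared dictionary and two class-specific dictionaries (object and background), and the class-wise reconstruction errors feed a joint decision measure. This is a minimal illustration, not the paper's implementation; the names (D_shared, D_obj, D_bg), the OMP solver, and the parameters (n_nonzero, sigma) are assumptions.

```python
# Illustrative sketch only -- assumes learnt dictionaries are given; the
# paper's actual solver, constraints, and decision measure may differ.
import numpy as np
from sklearn.decomposition import sparse_encode


def observation_likelihood(x, D_shared, D_obj, D_bg, n_nonzero=10, sigma=0.1):
    """Score a candidate feature vector x (d,) against the joint dictionary."""
    # Concatenate shared and class-specific dictionaries; normalize atoms.
    D = np.hstack([D_shared, D_obj, D_bg])          # shape (d, k0 + k1 + k2)
    D = D / np.linalg.norm(D, axis=0, keepdims=True)

    # Sparse-code the candidate over the joint dictionary (OMP here for
    # simplicity; any sparse solver could be substituted).
    alpha = sparse_encode(x[None, :], D.T, algorithm="omp",
                          n_nonzero_coefs=n_nonzero)[0]

    k0, k1 = D_shared.shape[1], D_obj.shape[1]
    a_sh, a_obj, a_bg = alpha[:k0], alpha[k0:k0 + k1], alpha[k0 + k1:]

    # Class-wise reconstruction errors; shared atoms contribute to both.
    err_obj = np.linalg.norm(x - D[:, :k0] @ a_sh - D[:, k0:k0 + k1] @ a_obj)
    err_bg = np.linalg.norm(x - D[:, :k0] @ a_sh - D[:, k0 + k1:] @ a_bg)

    # Joint decision measure (assumed form): favour candidates reconstructed
    # well by the object dictionary and poorly by the background dictionary.
    return np.exp(-(err_obj - err_bg) / sigma)


# Usage with random data, just to show the expected shapes.
rng = np.random.default_rng(0)
D_shared, D_obj, D_bg = (rng.standard_normal((64, k)) for k in (20, 30, 30))
x = rng.standard_normal(64)
print(observation_likelihood(x, D_shared, D_obj, D_bg))
```

In a particle-filter-style Bayesian framework, a score of this kind would weight each sampled candidate state; the exponential weighting and the difference of errors are one common choice, not necessarily the authors'.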
