Abstract

Sparse representation-based trackers have attracted much attention in the research community because of their superior performance, despite their computational complexity. However, the common assumption that the coding residual follows either a Gaussian or a Laplacian distribution may not accurately describe the residual in practical visual tracking scenarios. To address this issue and improve tracking performance, a novel generative tracker is proposed within a Bayesian inference framework by introducing robust coding (RC) into the PCA reconstruction. Global and local PCA subspace appearance models are also combined to enhance tracking performance. Further, a robust RC distance is proposed to differentiate candidate samples from the subspace, and a novel observation likelihood is defined based on both global and local RC distances. In addition, a robust occlusion map generation scheme and a novel appearance model update mechanism are proposed. Quantitative and qualitative evaluations on the OTB-50 and VOT2016 datasets demonstrate that the proposed method performs favorably against several trackers based on the particle filter framework.
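The central idea, robust coding inside a PCA reconstruction, can be sketched as an iteratively reweighted least-squares fit: per-pixel weights down-weight large residuals (e.g. occluded pixels), so the residual need not be Gaussian or Laplacian. The function name, the Gaussian weight kernel, and all parameters below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def robust_pca_reconstruction(y, U, mu, n_iters=5, eps=1e-8):
    """Sketch of robust coding over a PCA subspace.

    y  : (d,)   observed candidate patch (vectorized)
    U  : (d, k) PCA basis of the appearance subspace
    mu : (d,)   PCA mean template

    Returns coding coefficients c, per-pixel weights w, and a
    weighted-residual score playing the role of an "RC distance".
    """
    w = np.ones_like(y)
    c = np.zeros(U.shape[1])
    r = y - mu
    for _ in range(n_iters):
        # Weighted least squares: minimize || W^(1/2) (y - mu - U c) ||^2
        Wu = U * w[:, None]                       # W U
        A = U.T @ Wu                              # U^T W U  (k x k)
        b = Wu.T @ (y - mu)                       # U^T W (y - mu)
        c = np.linalg.lstsq(A, b, rcond=None)[0]
        r = y - mu - U @ c
        # Robust scale estimate from the median absolute residual
        sigma = np.median(np.abs(r)) / 0.6745 + eps
        # Gaussian-kernel weights: outlier pixels get weight near zero
        w = np.exp(-(r / sigma) ** 2 / 2)
    rc_distance = float(np.sum(w * r ** 2))       # weighted residual score
    return c, w, rc_distance
```

In a particle filter, such a weighted-residual score for each candidate could feed the observation likelihood, and thresholding the weights `w` yields a rough occlusion map.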
