Abstract

In this paper, a robust visual tracking scheme is achieved through a novel sparse tracker via collaborative motion and appearance (TCMA). A coarse-to-fine framework that incorporates both motion and holistic appearance information is adopted. In the coarse search, we employ an optical flow map to generate motion particles, and a rough estimate of the target image patch is obtained using an $l_2$-regularized least-squares method. In the fine search, a novel smooth term is introduced into the cost function to improve the robustness of the tracker. With this smooth term, the object appearance in the previous frame also affects the computation of the sparse coefficients in the current frame, allowing the tracker to exploit temporal information between consecutive frames instead of relying only on single-frame appearance information, as conventional sparse-coding-based tracking algorithms do. To preserve both the original and the latest appearance information in the template, a quadratic-function-like weight allocation scheme, combined with particle-contributed histogram correlation, is developed for the update stage. Both qualitative and quantitative studies are conducted on a set of challenging image sequences, and the superior performance over other state-of-the-art algorithms is verified through these experiments.
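The following minimal sketch (Python/NumPy) illustrates, under stated assumptions, the two estimation steps the abstract describes: a closed-form $l_2$-regularized least-squares solution for the coarse search, and a sparse-coding objective augmented with a smooth term that penalizes deviation from the previous frame's coefficients for the fine search. The function names, the ISTA-style solver, and the specific regularization weights are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def coarse_estimate(D, y, lam=0.01):
    """l2-regularized least squares (ridge regression) for the coarse search.
    D: (d, n) template matrix; y: (d,) column-stacked candidate patch.
    Closed form: c = (D^T D + lam*I)^{-1} D^T y."""
    n = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)

def fine_estimate(D, y, c_prev, lam=0.1, mu=0.05, n_iter=200):
    """Sparse coding with an assumed smooth term (illustrative formulation):
        min_c ||y - D c||_2^2 + lam*||c||_1 + mu*||c - c_prev||_2^2
    solved with a simple ISTA-style proximal gradient loop."""
    c = c_prev.copy()
    # Lipschitz constant of the gradient of the smooth part of the objective
    L = 2.0 * np.linalg.norm(D, 2) ** 2 + 2.0 * mu
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ c - y) + 2.0 * mu * (c - c_prev)
        z = c - grad / L
        # soft-thresholding step handles the l1 penalty
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return c
```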
