Abstract

In this paper, we present a novel appearance variation prediction model that can be embedded into existing generative-appearance-model-based tracking frameworks. Unlike existing works, which learn the appearance model online from previously obtained tracking results, we propose to predict the appearance reconstruction error. We observe that although the learned appearance model can precisely describe the target in previous frames, the tracking result may still be inaccurate if, in the following frame, the patch most similar to the appearance model is assumed to be the target. We first investigate this phenomenon through experiments on two public sequences and find that, in most cases, the best target candidate is not the one with the minimal reconstruction error. We then design three kinds of features that encode the motion, appearance, and appearance-reconstruction-error information of the target's surrounding image patches, capturing potential factors that may cause variations in the target's appearance as well as in its reconstruction error. Finally, using these features, we learn an effective random forest for predicting the target's reconstruction error during tracking. Experiments on various datasets demonstrate that the proposed method can be combined with many existing trackers and significantly improves their performance.
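The key idea above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): the appearance model is represented by an orthonormal linear basis (as in PCA-style generative trackers), candidate patches are scored by reconstruction error, and the naive argmin rule is contrasted with a rule that adjusts each error by a predicted error term, which in the paper would come from the learned random forest. Here the predictor output is a placeholder.

```python
import numpy as np

def reconstruction_error(patch, basis):
    """Error of reconstructing a vectorized patch with a linear
    subspace appearance model (columns of `basis` are orthonormal)."""
    coeff = basis.T @ patch          # project onto the subspace
    recon = basis @ coeff            # reconstruct from the projection
    return float(np.linalg.norm(patch - recon))

rng = np.random.default_rng(0)
d, k = 64, 4                                          # patch dim, subspace dim
basis, _ = np.linalg.qr(rng.standard_normal((d, k)))  # orthonormal basis

# candidate patches sampled around the previous target position
candidates = rng.standard_normal((10, d))
errors = np.array([reconstruction_error(c, basis) for c in candidates])

# naive rule: assume the patch with minimal reconstruction error is the target
naive_idx = int(np.argmin(errors))

# proposed rule (sketch): compare each candidate's observed error against a
# predicted error; `predicted` stands in for the random-forest output that
# would be learned from motion/appearance/error features of nearby patches
predicted = np.zeros(len(candidates))  # hypothetical regressor output
adjusted_idx = int(np.argmin(errors - predicted))
```

With a zero (uninformative) predictor the adjusted rule reduces to the naive argmin; the paper's contribution lies in learning a non-trivial predictor so that the two rules diverge when the target's appearance is about to change.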
