Abstract

This paper proposes a robust visual tracking method based on a collaborative model. The collaborative model combines a two-stage tracker with a HOG-based detector, exploiting both holistic and local information about the target. The two-stage tracker learns a linear classifier from patches of the original images, while the HOG-based detector trains a linear discriminant analysis classifier on the object exemplar. Finally, a result decision-making strategy is developed that considers both the original template and appearance variations, so that the tracker and the detector collaborate with each other. The proposed method has been evaluated on the OTB-50, OTB-100, and Temple-Color datasets; the results demonstrate that it effectively addresses challenging cases such as scale variation and out-of-view targets, and achieves better performance than state-of-the-art trackers.
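To make the detector component concrete, the sketch below illustrates the general idea of training a linear discriminant analysis classifier on HOG-style features to score candidate patches as target vs. background. This is a minimal, hypothetical illustration, not the authors' implementation: the `hog_like` feature extractor is a simplified stand-in for a full HOG descriptor, and the synthetic striped/noise patches are invented training data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def hog_like(patch, cell=8, bins=9):
    """Simplified HOG-style descriptor: per-cell histograms of
    gradient orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    feats = []
    for i in range(0, patch.shape[0], cell):
        for j in range(0, patch.shape[1], cell):
            h, _ = np.histogram(ang[i:i + cell, j:j + cell], bins=bins,
                                range=(0, np.pi),
                                weights=mag[i:i + cell, j:j + cell])
            feats.append(h / (np.linalg.norm(h) + 1e-6))  # cell-wise L2 norm
    return np.concatenate(feats)

# Hypothetical training data: "target" patches with vertical stripes
# (strong horizontal gradients) vs. flat-noise "background" patches.
rng = np.random.default_rng(0)
stripes = np.zeros((32, 32))
stripes[:, ::2] = 1.0
pos = [stripes + 0.05 * rng.standard_normal((32, 32)) for _ in range(20)]
neg = [rng.random((32, 32)) for _ in range(20)]

X = np.array([hog_like(p) for p in pos + neg])
y = np.array([1] * 20 + [0] * 20)

# LDA detector: a new candidate patch is scored as target (1) or not (0).
clf = LinearDiscriminantAnalysis().fit(X, y)
train_acc = clf.score(X, y)
```

In the collaborative scheme described above, such a detector would be paired with the patch-based tracker, with the decision-making strategy arbitrating between their outputs.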
