Abstract
This paper proposes a robust visual tracking method based on a collaborative model. The model combines a two-stage tracker with a HOG-based detector, exploiting both holistic and local information about the target: the two-stage tracker learns a linear classifier from patches of the original images, while the HOG-based detector trains a linear discriminant analysis (LDA) classifier on the object exemplar. Finally, a result decision-making strategy is developed that considers both the original template and appearance variations, enabling the tracker and the detector to collaborate with each other. The proposed method has been evaluated on the OTB-50, OTB-100, and Temple-Color datasets; the results demonstrate that it effectively handles challenging cases such as scale variation and out-of-view, and achieves better performance than state-of-the-art trackers.
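The HOG-plus-LDA detector described above can be illustrated with a minimal sketch. This is not the authors' implementation: the single-cell gradient histogram, patch sizes, and ridge term below are illustrative assumptions, shown only to make the "HOG features + two-class LDA" idea concrete.

```python
import numpy as np

def hog_features(patch, n_bins=9):
    # Simplified HOG-style descriptor: one histogram of unsigned gradient
    # orientations over the whole patch, weighted by gradient magnitude.
    # (Real HOG uses a grid of cells with block normalization.)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)

def lda_train(X_pos, X_neg):
    # Two-class LDA: w = Sigma^{-1} (mu_pos - mu_neg), using a pooled
    # covariance estimate with a small ridge for numerical stability.
    mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
    centered = np.vstack([X_pos - mu_p, X_neg - mu_n])
    sigma = centered.T @ centered / len(centered) + 1e-3 * np.eye(centered.shape[1])
    w = np.linalg.solve(sigma, mu_p - mu_n)
    b = -0.5 * w @ (mu_p + mu_n)       # threshold midway between the classes
    return w, b

def lda_score(w, b, x):
    # Positive score -> target (exemplar-like), negative -> background.
    return w @ x + b
```

In a tracker, `X_pos` would hold descriptors of patches drawn around the object exemplar and `X_neg` descriptors of background patches; candidate windows are then ranked by `lda_score`.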