Abstract

Visual tracking is a fundamental problem in computer vision and pattern recognition. Existing visual tracking methods usually localize the target with a bounding box. Recently, learning patch-based weighted features has been shown to be an effective way to mitigate background clutter within the bounding-box description of the target, and can thus improve tracking performance significantly. In this paper, we propose a simple yet effective approach, called Laplacian Regularized Random Walk Ranking (LRWR), to learn more robust patch-based weighted features of the target object for visual tracking. The main advantages of our LRWR model over existing methods are: (1) it integrates local spatial and global appearance cues simultaneously, and thus leads to a more robust solution for patch weight computation; (2) it has a simple closed-form solution, which makes our tracker efficient. The learned features are incorporated into a structured SVM to perform object tracking. Experiments show that our approach performs favorably against state-of-the-art trackers on two standard benchmark datasets.
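The abstract does not give the closed form itself, but random-walk ranking on a patch graph typically reduces to solving a single linear system, in the style of manifold ranking: f = (I − αS)⁻¹y, where S is the symmetrically normalized patch affinity matrix and y indicates seed (foreground) patches. The sketch below is an illustrative assumption of that scheme, not the paper's exact formulation; the Gaussian-kernel affinity, the parameter α, and the function name `patch_weights` are all hypothetical.

```python
import numpy as np

def patch_weights(features, seed, alpha=0.99, sigma=1.0):
    """Hypothetical closed-form random-walk ranking over a patch graph.

    features: (n, d) array of per-patch feature vectors
    seed:     (n,) indicator vector of likely-foreground patches
    Returns weights in [0, 1], one per patch.
    """
    # Pairwise affinities between patch features (Gaussian kernel),
    # encoding the global appearance cue.
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops

    # Symmetrically normalized affinity S = D^{-1/2} W D^{-1/2},
    # the matrix underlying the normalized graph Laplacian I - S.
    d = np.maximum(W.sum(axis=1), 1e-12)
    Dinv_sqrt = 1.0 / np.sqrt(d)
    S = W * Dinv_sqrt[:, None] * Dinv_sqrt[None, :]

    # Closed-form ranking: f = (I - alpha * S)^{-1} y,
    # solved as a linear system rather than an explicit inverse.
    n = features.shape[0]
    f = np.linalg.solve(np.eye(n) - alpha * S, seed.astype(float))
    return f / (f.max() + 1e-12)  # normalize weights to [0, 1]
```

Because the solution is a single `n × n` solve (with `n` the number of patches, typically small), the weight update stays cheap per frame, which matches the efficiency claim in the abstract.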
