Abstract

We propose a visual tracking method that uses multiple Hough detectors to address long-term robust object tracking in unconstrained environments. The method constructs the detectors through feature selection based on mutual information. These detectors learn partial appearances of the target and simultaneously evaluate image locations via voting-based detection with the generalized Hough transform. Based on the detection results, the best detector is selected by a minimum-entropy criterion and delivers the final hypothesis for the target location. The feature selection allows our tracker to identify and exploit the most discriminative parts of the target, making it more robust to appearance changes such as occlusion and deformation. The detector selection can correct undesirable model updates and restore the tracker after a tracking failure. Meanwhile, the Hough-based detection reduces the amount of noise introduced during online self-training and thus effectively prevents the tracker from drifting. The method is evaluated on the CVPR2013 Visual Tracker Benchmark, and the experimental results demonstrate that it outperforms other tracking algorithms in terms of both success rate and precision.
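As a rough illustrative sketch (not the authors' implementation), the minimum-entropy detector selection could be realized as below, assuming each Hough detector produces a 2-D vote map over image locations; the function names and the normalization constant are assumptions for illustration only.

```python
import numpy as np

def select_detector(vote_maps):
    """Pick the detector whose Hough vote map has the lowest entropy.

    vote_maps: list of 2-D numpy arrays, one voting map per Hough detector.
    A peaked (low-entropy) map indicates a confident, unambiguous detection.
    Returns the index of the chosen detector and the (row, col) of its peak,
    which serves as the hypothesized target location.
    """
    entropies = []
    for votes in vote_maps:
        p = votes / (votes.sum() + 1e-12)          # normalize votes to a distribution
        h = -np.sum(p * np.log(p + 1e-12))         # Shannon entropy of the vote map
        entropies.append(h)
    best = int(np.argmin(entropies))               # minimum-entropy criterion
    peak = np.unravel_index(np.argmax(vote_maps[best]), vote_maps[best].shape)
    return best, peak
```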
