Abstract
Visual tracking is challenged by factors such as occlusion, background clutter, abrupt target motion, illumination variation, and changes in scale and orientation. In this paper, an integrated framework for online learning of fused temporal appearance and spatial constraint models is proposed for robust and accurate visual target tracking. The temporal appearance model encapsulates the target's historical appearance information in order to cope with variations caused by illumination changes and motion dynamics. The spatial constraint model, in turn, exploits the relationships between the target and its neighbors to handle occlusion and cluttered backgrounds. To reduce the computational complexity of state estimation and to emphasize the relative importance of the different basis vectors, a K-nearest Local Smooth Algorithm (KLSA) is used to describe the spatial state model. Further, a customized Accelerated Proximal Gradient (APG) method iteratively computes an optimal solution under KLSA. Finally, the optimal state estimate is obtained from weighted samples within a particle filtering framework. Experimental results on large-scale benchmark sequences show that the proposed tracker achieves favorable performance compared with state-of-the-art methods.
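The abstract does not detail the customized APG step, but a generic accelerated proximal gradient (FISTA-style) solver for an l1-regularized least-squares objective illustrates the kind of iteration involved; this is a minimal sketch of standard APG, not the paper's KLSA-specific variant, and the function names and parameters here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def apg_l1(A, b, lam, n_iter=200):
    """Generic APG sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Illustrative stand-in; the paper's customized APG under KLSA
    would differ in its model and constraints."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()                           # extrapolated (momentum) point
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)  # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2       # Nesterov momentum update
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

In a sparse-representation tracker, `A` would hold the basis vectors (template dictionary), `b` a candidate observation, and the recovered sparse coefficients `x` would score each particle before the weighted state estimate is formed.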