Abstract
Most visual tracking methods rely on single-stage state estimation, which limits precise localization of the target in dynamic environments involving occlusion, object deformation, rotation, scaling, and cluttered backgrounds. To address these issues, we introduce a novel multi-stage coarse-to-fine tracking framework that adapts quickly to environment dynamics. The key ideas of our work are a two-stage estimation of the object state and an adaptive fusion model. A coarse estimate of the object state is obtained using optical flow, and multiple fragments are generated around this approximation. Precise localization of the object is then achieved by evaluating these fragments with three complementary cues. The proposed tracker adapts quickly to dynamic environment changes through context-sensitive cue reliability, which also makes it directly applicable to the development of expert systems for video surveillance. In addition, the framework handles object rotation and scaling through a random-walk state model and rotation-invariant features. The proposed tracker is evaluated on eight benchmarked color video sequences and obtains competitive results: averaged over the outcomes, a mean center location error of 6.791 pixels and an F-measure of 0.78. The results demonstrate that the proposed tracker not only outperforms various state-of-the-art trackers but also effectively handles various dynamic environments.
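To make the coarse-to-fine pipeline described above concrete, the following Python sketch illustrates its overall shape: a coarse state from an optical-flow displacement, candidate fragments sampled around it, a reliability-weighted fusion of three cue scores, and a simple weight update. All function names, the linear fusion, and the weight-update rule are our own illustrative assumptions for exposition and are not the paper's implementation.

```python
import numpy as np

# Illustrative sketch only: the cue scoring, fusion, and weight-update rules
# below are assumed placeholders, not the authors' exact formulation.

def coarse_estimate(prev_center, flow_displacement):
    """Stage 1: shift the previous center by an optical-flow displacement."""
    return prev_center + flow_displacement

def generate_fragments(coarse_center, n=20, spread=8.0, rng=None):
    """Sample candidate object states around the coarse estimate (random-walk style)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return coarse_center + rng.normal(scale=spread, size=(n, 2))

def fuse_and_localize(fragments, cue_scores, cue_weights):
    """Stage 2: score each fragment by a reliability-weighted sum of cue similarities."""
    fused = cue_scores @ cue_weights          # adaptive linear fusion of cues
    return fragments[np.argmax(fused)], fused

def update_weights(cue_weights, cue_scores, best_idx, lr=0.1):
    """Raise the reliability of cues that agreed with the selected fragment."""
    w = (1 - lr) * cue_weights + lr * cue_scores[best_idx]
    return w / w.sum()

# Toy run with synthetic numbers standing in for real flow vectors and cue responses
rng = np.random.default_rng(1)
center = np.array([120.0, 80.0])
coarse = coarse_estimate(center, flow_displacement=np.array([3.0, -1.5]))
frags = generate_fragments(coarse, rng=rng)
scores = rng.uniform(size=(len(frags), 3))    # stand-ins for 3 complementary cue similarities
weights = np.full(3, 1 / 3)
best, fused = fuse_and_localize(frags, scores, weights)
weights = update_weights(weights, scores, int(np.argmax(fused)))
print("coarse:", coarse, "refined:", best, "cue weights:", weights)
```

In practice the flow displacement would come from an optical-flow estimator on consecutive frames and the cue scores from appearance models, but the two-stage structure and adaptive fusion shown here follow the abstract's description.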