Abstract

Visual tracking is a complex problem due to unconstrained appearance variations and a dynamic environment. The extraction of complementary information from the object's environment via multiple features, and adaptation to the target's appearance variations, are the key problems addressed in this paper. To this end, we propose a robust object tracking framework based on the unified graph fusion (UGF) of multi-cue features to adapt to the object's appearance. The proposed cross-diffusion of sparse and dense features not only suppresses the deficiencies of individual features but also extracts the complementary information across cues. This iterative process builds robust unified features that are invariant to object deformation, fast motion, and occlusion. The robustness of the unified feature also enables the random forest classifier to precisely distinguish the foreground from the background, adding resilience to background clutter. In addition, we present a novel kernel-based adaptation strategy using outlier detection and a transductive reliability metric. The adaptation strategy updates the appearance model to accommodate variations in scale, illumination, and rotation. Both qualitative and quantitative analyses on benchmark video sequences from OTB-50, OTB-100, VOT2017/18, and UAV123 show that the proposed UGF tracker performs favorably against 18 other state-of-the-art trackers under various object tracking challenges.
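For intuition, the cross-diffusion step described above can be sketched as an iterative exchange between two cue-specific affinity graphs, in the spirit of similarity-network-fusion-style updates. The code below is a minimal illustrative sketch, not the authors' implementation: the function names, the kNN sparsification, the per-iteration renormalization, the iteration count, and the final averaging of the diffused graphs are all assumptions made for illustration.

```python
import numpy as np

def row_normalize(W):
    """Make each row sum to 1 (full status matrix)."""
    return W / W.sum(axis=1, keepdims=True)

def knn_sparsify(W, k):
    """Sparse kernel matrix: keep only each row's k largest affinities."""
    S = np.zeros_like(W)
    idx = np.argsort(W, axis=1)[:, -k:]        # k nearest neighbors per row
    rows = np.arange(W.shape[0])[:, None]
    S[rows, idx] = W[rows, idx]
    return row_normalize(S)

def cross_diffuse(W1, W2, k=5, iters=20):
    """Diffuse each cue's dense graph across the other's sparse graph."""
    P1, P2 = row_normalize(W1), row_normalize(W2)
    S1, S2 = knn_sparsify(W1, k), knn_sparsify(W2, k)
    for _ in range(iters):
        # Simultaneous update: each status matrix diffuses through the
        # other cue's sparse kernel; renormalizing keeps rows stochastic
        # (a stabilizing assumption in this sketch).
        P1, P2 = (row_normalize(S1 @ P2 @ S1.T),
                  row_normalize(S2 @ P1 @ S2.T))
    return (P1 + P2) / 2                       # unified (fused) affinity

# Toy usage: fuse affinity graphs built from two hypothetical cues
# (e.g., color histograms and HOG descriptors of candidate patches).
rng = np.random.default_rng(0)
color, hog = rng.random((30, 8)), rng.random((30, 8))
W_color = np.exp(-np.linalg.norm(color[:, None] - color[None], axis=2))
W_hog = np.exp(-np.linalg.norm(hog[:, None] - hog[None], axis=2))
unified = cross_diffuse(W_color, W_hog)
```

The intended effect, as the abstract describes, is that diffusing each cue's dense graph through the other cue's sparse (noise-suppressed) graph lets reliable neighborhoods reinforce each other while individual feature deficiencies are damped out.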
