Abstract
With the increasing diversity of visual tracking tasks, object tracking in RGB and thermal (RGB-T) modalities has received widespread interest. Most existing RGB-T tracking methods improve tracking performance by integrating hierarchically complementary information from the RGB and thermal modalities; however, they handle tracking failures poorly because they lack re-detection capability. To address these issues, we propose a new RGB-T tracking method with online sample learning and adaptive object recovery. First, the features of the RGB and thermal modalities are concatenated for robust appearance modeling. Second, a multimodal fusion strategy is designed to stably integrate reliable information from both modalities, and similarity is used to measure tracking confidence. Finally, a detector with online learning of positive and negative samples and adaptive recovery is developed to correct unreliable tracking results. Numerical results on five recent large-scale benchmark datasets demonstrate that the proposed tracker achieves competitive performance compared with other state-of-the-art methods.
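The abstract only outlines these steps; the sketch below is a loose illustration of how the cross-modal feature concatenation and a similarity-based confidence check could be wired together. The function names, the use of cosine similarity, the feature shapes, and the re-detection threshold are all assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def concat_modal_features(rgb_feat, thermal_feat):
    """Concatenate per-modality feature maps along the channel axis.

    rgb_feat, thermal_feat: arrays of shape (C, H, W). The shapes and the
    concatenation axis are illustrative assumptions, not the paper's
    exact architecture.
    """
    return np.concatenate([rgb_feat, thermal_feat], axis=0)

def tracking_confidence(candidate_feat, template_feat):
    """Cosine similarity between candidate and template features,
    used here as a stand-in for the paper's similarity-based confidence."""
    c = candidate_feat.ravel()
    t = template_feat.ravel()
    return float(np.dot(c, t) / (np.linalg.norm(c) * np.linalg.norm(t) + 1e-8))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rgb = rng.standard_normal((256, 16, 16))       # toy RGB feature map
    thermal = rng.standard_normal((256, 16, 16))   # toy thermal feature map
    fused = concat_modal_features(rgb, thermal)    # shape (512, 16, 16)
    template = rng.standard_normal(fused.shape)    # toy appearance template

    conf = tracking_confidence(fused, template)
    CONF_THRESHOLD = 0.3  # hypothetical threshold for triggering re-detection
    if conf < CONF_THRESHOLD:
        print(f"low confidence ({conf:.3f}): trigger re-detection / recovery")
    else:
        print(f"confident ({conf:.3f}): keep tracking result")
```

In this reading, the confidence score gates the detector described in the abstract: only when the similarity drops below a threshold would the online-learned positive/negative samples be used to re-detect and recover the object.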