Abstract

With the increasing diversity of visual tracking tasks, object tracking in RGB and thermal (RGB-T) modalities has received widespread interest. Most existing RGB-T tracking methods improve performance by integrating hierarchical complementary information from the RGB and thermal modalities; however, they handle tracking failures poorly because they lack a re-detection capability. To address these issues, we propose a new RGB-T tracking method with online sample learning and adaptive object recovery. First, the features of the RGB and thermal modalities are concatenated for robust appearance modeling. Second, a multimodal fusion strategy is designed to stably integrate reliable information from both modalities, and similarity is used to measure tracking confidence. Finally, a detector with online learning of positive and negative samples and adaptive recovery is developed to correct unreliable tracking results. Numerical results on five recent large-scale benchmark datasets demonstrate that the proposed tracker achieves competitive performance compared with other state-of-the-art methods.
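The abstract does not give implementation details, so as a rough illustration only, the idea of concatenating modality features and using similarity as a tracking-confidence score might be sketched as follows. The feature layout, the cosine-similarity choice, and the threshold value are all assumptions, not the paper's actual method:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors (one common
    # similarity choice; the paper's exact measure is unspecified).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def fused_feature(rgb_feat, thermal_feat):
    # Simple concatenation of RGB and thermal features into one
    # joint appearance representation.
    return list(rgb_feat) + list(thermal_feat)

def is_tracking_reliable(template_feat, current_feat, threshold=0.5):
    # Low similarity between the current result and the appearance
    # template suggests drift or failure, which would trigger the
    # re-detection / recovery stage. Threshold is a placeholder.
    return cosine_similarity(template_feat, current_feat) >= threshold
```

In this sketch, a frame whose fused feature falls below the similarity threshold would be flagged as unreliable and handed to the re-detection stage rather than used to update the appearance model.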
