Abstract

Correlation filtering (CF) has emerged as one of the best object-tracking frameworks, achieving a good balance between tracking accuracy and speed, and many successful trackers have been built on it. We propose a new co-saliency regularization method for CF-based visual object tracking, called CRCF. To the best of our knowledge, this is the first application of co-saliency regularization to CF-based tracking, building on widely used co-saliency detection and regularization methods. In CRCF, co-saliency information is extracted and incorporated into the regularization component of the tracking framework. The model first identifies salient regions from the feature differences between salient objects and the background. A co-saliency map is then generated to capture the global correspondence implicitly learned across multiple images. To avoid traversing all pixels, we use SLIC superpixel clustering to extract the object's saliency information, which significantly reduces the computational overhead of the image-processing step while retaining the image's features and structural information. Finally, during tracking, the co-saliency map can be learned dynamically, highlighting relevant object regions and mitigating the performance impact of weak discriminative representations. Quantitative and qualitative results on the OTB-2015, UAV123, LaSOT, GOT-10k, and VOT-2018 datasets show that CRCF outperforms state-of-the-art trackers.
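The superpixel-based saliency step described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: a regular grid partition stands in for SLIC clustering (a real system would use, e.g., `skimage.segmentation.slic`), and each region is scored by how far its mean colour lies from the global mean, a simple contrast-based saliency cue analogous to the feature-difference idea in the abstract. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def grid_superpixels(h, w, grid=4):
    """Stand-in for SLIC: partition the image into a regular grid of
    'superpixel' regions and return a per-pixel label map.
    (A real tracker would run SLIC clustering here instead.)"""
    rows = np.minimum(np.arange(h) * grid // h, grid - 1)
    cols = np.minimum(np.arange(w) * grid // w, grid - 1)
    return rows[:, None] * grid + cols[None, :]

def superpixel_saliency(image, labels):
    """Score each region by the distance of its mean colour from the
    global mean colour -- a crude contrast-based saliency cue that
    avoids scoring every pixel independently."""
    n = labels.max() + 1
    feats = np.stack([image[labels == k].mean(axis=0) for k in range(n)])
    global_mean = image.reshape(-1, image.shape[-1]).mean(axis=0)
    scores = np.linalg.norm(feats - global_mean, axis=1)
    return scores[labels]  # broadcast region scores back to a pixel map

# Toy image: dark background with one bright patch (the "object").
img = np.zeros((32, 32, 3))
img[8:16, 8:16] = 1.0
labels = grid_superpixels(32, 32, grid=4)
sal = superpixel_saliency(img, labels)
```

Because saliency is computed per region rather than per pixel, the cost of the scoring step scales with the number of superpixels (here 16) instead of the number of pixels, which is the computational saving the abstract attributes to SLIC.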
