Abstract

This paper presents an online learning method to enhance the robustness of a tracker for visual object tracking. Both handcrafted and non-handcrafted features are considered, including histograms of oriented gradients (HOG), color names, saliency, and feature maps from convolutional neural network (CNN) layers. The objective is to model both the object and its surrounding background using a background-aware correlation filter (BACF). In this paper, the different handcrafted and non-handcrafted features are used within the BACF framework to independently estimate the new location of the object. The response maps produced by the different features are then combined effectively. We have tested the performance of the proposed method on challenging image sequences, and it showed robustness on all of the tested videos.
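To illustrate the fusion step described above, the following is a minimal sketch of combining per-feature correlation response maps and locating the target at the peak of the fused map. The function names (`fuse_response_maps`, `locate_target`) and the weighted-average fusion rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_response_maps(responses, weights=None):
    """Combine per-feature correlation response maps into a single map.

    `responses` holds same-shaped response maps, e.g. from BACF run
    separately with HOG, color names, saliency, and CNN features.
    A simple weighted average is used here for illustration only; the
    paper's actual combination rule may differ.
    """
    responses = [np.asarray(r, dtype=np.float64) for r in responses]
    if weights is None:
        weights = np.ones(len(responses)) / len(responses)
    return sum(w * r for w, r in zip(weights, responses))

def locate_target(fused_map):
    """Return the (row, col) of the peak of the fused response map,
    taken as the new target location."""
    return np.unravel_index(np.argmax(fused_map), fused_map.shape)
```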
