Abstract

Visual object tracking plays a significant role in many applications of daily life, such as intelligent transportation and surveillance. However, an accurate and robust object tracker is difficult to obtain, as target objects often undergo large appearance changes caused by deformation, abrupt motion, background clutter, and occlusion. In this paper, we combine features extracted from deep convolutional neural networks pretrained on object recognition datasets with color name (CN) features and histogram of oriented gradients (HOG) features to improve tracking accuracy and robustness. The outputs of the convolutional layers encode high-level semantic information about targets, and such representations are robust to large appearance variations, but their spatial resolution is too coarse to locate targets precisely. In contrast, color name features concatenated with HOG features provide more precise localization but are less invariant to appearance changes. We first compute the response maps of the convolutional features and the HOG-CN features separately, then combine them linearly; the location of the maximum of the fused response gives the target position. We not only compare against trackers that adopt a single feature type alone, showing that their performance is inferior to ours, but also analyze the effect of exploiting features extracted from different convolutional layers on tracking performance. Moreover, we introduce an adaptive target response map into our tracking algorithm to suppress target drift as much as possible. Extensive experimental results on a large-scale benchmark dataset demonstrate the outstanding performance of the proposed algorithm.
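As a minimal sketch of the fusion step described above, the following Python snippet assumes both feature channels have already been correlated against learned filters to produce response maps of equal size; the function name, the min-max normalization, and the fixed weight `gamma` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_responses(resp_cnn, resp_hog_cn, gamma=0.5):
    """Linearly combine two response maps and return the peak location.

    resp_cnn    -- response map from deep convolutional features
    resp_hog_cn -- response map from concatenated HOG and color name features
    gamma       -- fusion weight (hypothetical; chosen here for illustration)
    """
    def normalize(r):
        # Min-max normalize so neither channel dominates the combination.
        r = r - r.min()
        return r / (r.max() + 1e-12)

    fused = gamma * normalize(resp_cnn) + (1.0 - gamma) * normalize(resp_hog_cn)
    # The position of the maximum fused response is taken as the target location.
    return np.unravel_index(np.argmax(fused), fused.shape)

# Illustrative usage with random stand-ins for real response maps.
resp_a = np.random.rand(64, 64)
resp_b = np.random.rand(64, 64)
row, col = fuse_responses(resp_a, resp_b, gamma=0.6)
```

In practice the two maps would need to be resampled to a common resolution before fusion, since the convolutional layers produce spatially coarser responses than the HOG-CN features.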
